| modelId (string, length 5-139) | author (string, length 2-42) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-07-16 06:27:54) | downloads (int64, 0-223M) | likes (int64, 0-11.7k) | library_name (string, 522 classes) | tags (list, length 1-4.05k) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-07-16 06:27:41) | card (string, length 11-1.01M) |
|---|---|---|---|---|---|---|---|---|---|
DevQuasar/PKU-DS-LAB.FairyR1-14B-Preview-GGUF | DevQuasar | 2025-05-28T16:21:57Z | 0 | 0 | null | [
"gguf",
"text-generation",
"base_model:PKU-DS-LAB/FairyR1-14B-Preview",
"base_model:quantized:PKU-DS-LAB/FairyR1-14B-Preview",
"endpoints_compatible",
"region:us",
"conversational"
]
| text-generation | 2025-05-28T13:52:47Z | ---
base_model:
- PKU-DS-LAB/FairyR1-14B-Preview
pipeline_tag: text-generation
---
[<img src="https://raw.githubusercontent.com/csabakecskemeti/devquasar/main/dq_logo_black-transparent.png" width="200"/>](https://devquasar.com)
Quantized version of: [PKU-DS-LAB/FairyR1-14B-Preview](https://huggingface.co/PKU-DS-LAB/FairyR1-14B-Preview)
'Make knowledge free for everyone'
<p align="center">
Made with <br>
<a href="https://www.civo.com/" target="_blank">
<img src="https://www.civo.com/assets/public/brand-assets/civo-logo-colour-60cc1622dedf346f7afde1fff760523f731b0aac106a5465af98ff4073114b74.svg" width="100"/>
</a>
</p>
<a href='https://ko-fi.com/L4L416YX7C' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi6.png?v=6' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>
|
BeckerAnas/flowing-yogurt-213 | BeckerAnas | 2025-05-28T16:19:00Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"convnextv2",
"image-classification",
"generated_from_trainer",
"base_model:facebook/convnextv2-tiny-1k-224",
"base_model:finetune:facebook/convnextv2-tiny-1k-224",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2025-05-28T13:01:03Z | ---
library_name: transformers
license: apache-2.0
base_model: facebook/convnextv2-tiny-1k-224
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: flowing-yogurt-213
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flowing-yogurt-213
This model is a fine-tuned version of [facebook/convnextv2-tiny-1k-224](https://huggingface.co/facebook/convnextv2-tiny-1k-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3359
- Accuracy: 0.3841
- Precision: 0.5072
- Recall: 0.3841
- F1: 0.3969
- Roc Auc: 0.6977
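Note that Recall equals Accuracy above (both 0.3841), which is the signature of support-weighted averaging. Under that assumption, the reported precision/recall/F1 could be reproduced from a confusion matrix as sketched below (an illustrative re-derivation, not the Trainer's actual metric code):

```python
def weighted_prf(confusion):
    """Support-weighted precision/recall/F1 from confusion[i][j] =
    count of true class i predicted as class j (illustrative sketch)."""
    n = len(confusion)
    total = sum(sum(row) for row in confusion)
    precision = recall = f1 = 0.0
    for c in range(n):
        tp = confusion[c][c]
        support = sum(confusion[c])                         # true examples of class c
        predicted = sum(confusion[r][c] for r in range(n))  # predictions of class c
        p = tp / predicted if predicted else 0.0
        r = tp / support if support else 0.0
        f = 2 * p * r / (p + r) if (p + r) else 0.0
        w = support / total
        precision += w * p
        recall += w * r
        f1 += w * f
    return precision, recall, f1

# Weighted recall reduces to plain accuracy, matching the card's numbers.
```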
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 15
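The cosine schedule with 10% warmup listed above can be sketched in a few lines. This is a simplified re-implementation for illustration, not the exact Hugging Face scheduler; the step count (255 = 17 steps/epoch over 15 epochs) is taken from the results table.

```python
import math

def lr_at_step(step, total_steps, base_lr=1e-4, warmup_ratio=0.1):
    """Linear warmup to base_lr, then cosine decay toward 0 (simplified sketch)."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))

peak = lr_at_step(25, 255)    # end of warmup: full 1e-4
final = lr_at_step(255, 255)  # decayed to ~0 by the last step
```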
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | Roc Auc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:-------:|
| 1.4164 | 1.0 | 17 | 1.4507 | 0.0065 | 0.1680 | 0.0065 | 0.0026 | 0.5489 |
| 1.4036 | 2.0 | 34 | 1.4003 | 0.1120 | 0.4643 | 0.1120 | 0.1725 | 0.5986 |
| 1.3866 | 3.0 | 51 | 1.3685 | 0.3516 | 0.4511 | 0.3516 | 0.3868 | 0.6448 |
| 1.3819 | 4.0 | 68 | 1.3547 | 0.4089 | 0.4425 | 0.4089 | 0.4095 | 0.6679 |
| 1.3686 | 5.0 | 85 | 1.3514 | 0.4206 | 0.4565 | 0.4206 | 0.4196 | 0.6797 |
| 1.3566 | 6.0 | 102 | 1.3485 | 0.4115 | 0.4791 | 0.4115 | 0.4149 | 0.6856 |
| 1.3505 | 7.0 | 119 | 1.3492 | 0.3763 | 0.4884 | 0.3763 | 0.4002 | 0.6901 |
| 1.3416 | 8.0 | 136 | 1.3432 | 0.3984 | 0.4683 | 0.3984 | 0.4025 | 0.6934 |
| 1.3411 | 9.0 | 153 | 1.3422 | 0.3789 | 0.4768 | 0.3789 | 0.3883 | 0.6953 |
| 1.3432 | 10.0 | 170 | 1.3400 | 0.3750 | 0.5029 | 0.3750 | 0.3917 | 0.6958 |
| 1.3286 | 11.0 | 187 | 1.3365 | 0.3854 | 0.4977 | 0.3854 | 0.3983 | 0.6970 |
| 1.3300 | 12.0 | 204 | 1.3354 | 0.3893 | 0.5007 | 0.3893 | 0.4007 | 0.6976 |
| 1.3297 | 13.0 | 221 | 1.3360 | 0.3841 | 0.5072 | 0.3841 | 0.3969 | 0.6976 |
| 1.3290 | 14.0 | 238 | 1.3359 | 0.3841 | 0.5072 | 0.3841 | 0.3969 | 0.6977 |
| 1.3398 | 15.0 | 255 | 1.3359 | 0.3841 | 0.5072 | 0.3841 | 0.3969 | 0.6977 |
### Framework versions
- Transformers 4.52.3
- Pytorch 2.7.0+cpu
- Datasets 3.6.0
- Tokenizers 0.21.0
|
aleeeeeex/ppo-LunarLander-v2 | aleeeeeex | 2025-05-28T16:18:54Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2025-05-28T16:18:24Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 240.90 +/- 17.85
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is assumed to follow the standard SB3 Hub convention; check the repository's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub (filename assumed)
checkpoint = load_from_hub(
    repo_id="aleeeeeex/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)
```
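The mean_reward reported in the metadata (240.90 +/- 17.85) is conventionally the mean and population standard deviation of episode returns over a batch of evaluation episodes, as computed by SB3's `evaluate_policy`. A dependency-free sketch of that summary (the episode returns below are hypothetical):

```python
import statistics

def summarize_returns(episode_returns):
    """Mean +/- population std of episode returns (ddof=0, matching numpy's default)."""
    mean = statistics.fmean(episode_returns)
    std = statistics.pstdev(episode_returns)
    return mean, std

returns = [252.1, 231.4, 246.9, 224.7, 250.3]  # hypothetical eval episodes
mean, std = summarize_returns(returns)
print(f"mean_reward: {mean:.2f} +/- {std:.2f}")
```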
|
shishirahm3d/ai-lawyer-bd | shishirahm3d | 2025-05-28T16:18:01Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-28T15:46:31Z | ---
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** shishirahm3d
- **License:** apache-2.0
|
jinx2321/byt5-tagged-1e4-paper-distilled-133 | jinx2321 | 2025-05-28T16:14:18Z | 3 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:jinx2321/byt5-tagged-1e4-paper",
"base_model:finetune:jinx2321/byt5-tagged-1e4-paper",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2025-05-28T08:05:59Z | ---
library_name: transformers
license: apache-2.0
base_model: jinx2321/byt5-tagged-1e4-paper
tags:
- generated_from_trainer
model-index:
- name: byt5-tagged-1e4-paper-distilled-133
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# byt5-tagged-1e4-paper-distilled-133
This model is a fine-tuned version of [jinx2321/byt5-tagged-1e4-paper](https://huggingface.co/jinx2321/byt5-tagged-1e4-paper) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 128
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.52.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.1
|
xw17/Llama-3.2-3B-Instruct_finetuned_1_optimized1_oversampling_FT | xw17 | 2025-05-28T16:11:09Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-28T16:08:07Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
tetttssts/llama_adapter | tetttssts | 2025-05-28T16:06:58Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mllama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-28T10:48:52Z | ---
base_model: unsloth/llama-3.2-11b-vision-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mllama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** tetttssts
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-11b-vision-instruct-unsloth-bnb-4bit
This mllama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
oskdabk/MNLP_M2_quantized_model | oskdabk | 2025-05-28T16:06:40Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"compressed-tensors",
"region:us"
]
| text-generation | 2025-05-28T16:05:57Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
classifiedshadow/model | classifiedshadow | 2025-05-28T16:06:21Z | 8 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-05-15T19:42:49Z | ---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** classifiedshadow
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
AndreiRabau/gpt-car-recommender | AndreiRabau | 2025-05-28T16:06:12Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-28T16:05:36Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
gencbeyinlernet/hukuk_model2 | gencbeyinlernet | 2025-05-28T16:04:19Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2025-05-28T16:04:18Z | ---
license: apache-2.0
---
|
07-Jobz-Hunting-Sajal-Malik-Viral-Videos/link.full.video.sapna.shah.viral.video.original.here.now | 07-Jobz-Hunting-Sajal-Malik-Viral-Videos | 2025-05-28T16:02:16Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-05-28T16:02:09Z |
<a href="https://sdu.sk/uLf"><img src="https://i.ibb.co.com/xMMVF88/686577567.gif" alt="fsd" /></a>
<a href="https://sdu.sk/uLf" rel="nofollow">►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► (𝗦𝗶𝗴𝗻 𝗨𝗽 𝘁𝗼 𝙁𝙪𝙡𝙡 𝗪𝗮𝘁𝗰𝗵 𝙑𝙞𝙙𝙚𝙤❤️❤️)</a>
<a href="https://sdu.sk/uLf" rel="nofollow">🔴 ➤►✅𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐥𝐢𝐧𝐤)</a>
|
MetaphoricalCode/EVA-Qwen2.5-32B-v0.2-exl3-4.5bpw-hb8 | MetaphoricalCode | 2025-05-28T16:02:09Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"conversational",
"dataset:anthracite-org/kalo-opus-instruct-22k-no-refusal",
"dataset:Nopm/Opus_WritingStruct",
"dataset:Gryphe/Sonnet3.5-SlimOrcaDedupCleaned",
"dataset:Gryphe/Sonnet3.5-Charcard-Roleplay",
"dataset:Gryphe/ChatGPT-4o-Writing-Prompts",
"dataset:Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned",
"dataset:Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned",
"dataset:nothingiisreal/Reddit-Dirty-And-WritingPrompts",
"dataset:allura-org/Celeste-1.x-data-mixture",
"dataset:cognitivecomputations/dolphin-2.9.3",
"base_model:EVA-UNIT-01/EVA-Qwen2.5-32B-v0.2",
"base_model:quantized:EVA-UNIT-01/EVA-Qwen2.5-32B-v0.2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"exl3",
"region:us"
]
| text-generation | 2025-05-28T15:47:22Z | ---
library_name: transformers
license: apache-2.0
datasets:
- anthracite-org/kalo-opus-instruct-22k-no-refusal
- Nopm/Opus_WritingStruct
- Gryphe/Sonnet3.5-SlimOrcaDedupCleaned
- Gryphe/Sonnet3.5-Charcard-Roleplay
- Gryphe/ChatGPT-4o-Writing-Prompts
- Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned
- Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned
- nothingiisreal/Reddit-Dirty-And-WritingPrompts
- allura-org/Celeste-1.x-data-mixture
- cognitivecomputations/dolphin-2.9.3
base_model:
- EVA-UNIT-01/EVA-Qwen2.5-32B-v0.2
base_model_relation: quantized
tags:
- generated_from_trainer
model-index:
- name: EVA-Qwen2.5-32B-SFFT-v0.1
results: []
---
## Quantized using the default exllamav3 (0.0.2) quantization process.
- Original model: https://huggingface.co/EVA-UNIT-01/EVA-Qwen2.5-32B-v0.2
- exllamav3: https://github.com/turboderp-org/exllamav3
---
# EVA Qwen2.5-32B v0.2
<p>
An RP/storywriting specialist model: a full-parameter finetune of Qwen2.5-32B on a mixture of synthetic and natural data.<br>
It uses Celeste 70B 0.1 data mixture, greatly expanding it to improve versatility, creativity and "flavor" of the resulting model.<br>
</p>
<p>Dedicated to Nev.</p>
<p><b>Version notes for 0.2</b>: The whole dataset was reprocessed from scratch, due to a severe mistake in the previously used pipeline, which left the data poisoned with a lot of non-Unicode characters. The result is no more weird generation artifacts and improved stability. Major kudos to Cahvay for his work on fixing this critical issue.</p>
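The kind of cleanup described in the version notes (dropping characters that do not survive a UTF-8 round trip, then stripping non-printables) can be sketched as follows; this is an illustrative guess at the approach, not the actual pipeline:

```python
def clean_text(text: str) -> str:
    """Drop characters lost in a UTF-8 round trip, then strip non-printables."""
    text = text.encode("utf-8", errors="ignore").decode("utf-8", errors="ignore")
    # Keep printable characters plus the whitespace the data legitimately uses
    return "".join(ch for ch in text if ch.isprintable() or ch in "\n\t")
```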
<p>
<p>Prompt format is ChatML.</p><br>
<h3>Recommended sampler values:</h3>
<ul>
<li>Temperature: 1</li>
<li>Min-P: 0.05</li>
<li>Top-A: 0.2</li>
<li>Repetition Penalty: 1.03</li>
</ul>
<h3>Recommended SillyTavern presets (via CalamitousFelicitousness):</h3>
- [Context](https://huggingface.co/EVA-UNIT-01/EVA-Yi-1.5-9B-32K-V1/blob/main/%5BChatML%5D%20Roleplay-v1.9%20Context.json)
- [Instruct and System Prompt](https://huggingface.co/EVA-UNIT-01/EVA-Yi-1.5-9B-32K-V1/blob/main/%5BChatML%5D%20Roleplay-v1.9%20Instruct.json)
</p>
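As a toy illustration of the Min-P cutoff recommended above: tokens are kept only if their probability is at least min_p times the top token's probability. This is a simplified sketch, not any particular inference engine's implementation.

```python
def min_p_filter(probs, min_p=0.05):
    """Zero out tokens below min_p * max probability, then renormalize."""
    threshold = min_p * max(probs)
    kept = [p if p >= threshold else 0.0 for p in probs]
    total = sum(kept)
    return [p / total for p in kept]

# With min_p=0.05 and a top probability of 0.60, the cutoff is 0.03,
# so only the 0.02 token below is filtered out.
filtered = min_p_filter([0.60, 0.25, 0.10, 0.03, 0.02], min_p=0.05)
```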
<p>
<br>
<h3>
Training data:
</h3>
<ul>
<li>Celeste 70B 0.1 data mixture minus Opus Instruct subset. See that model's <a href=https://huggingface.co/nothingiisreal/L3.1-70B-Celeste-V0.1-BF16>card</a> for details.</li>
<li>Kalomaze's Opus_Instruct_25k dataset, filtered for refusals.</li>
<li>A subset (1k rows) of ChatGPT-4o-WritingPrompts by Gryphe</li>
<li>A subset (2k rows) of Sonnet3.5-Charcards-Roleplay by Gryphe</li>
<li>Synthstruct and SynthRP datasets by Epiculous</li>
<li>A subset from Dolphin-2.9.3, including filtered version of not_samantha and a small subset of systemchat.</li>
</ul>
<h3>
Training time and hardware:
</h3>
<ul><li>7 hours on 8xH100 SXM, provided by <a href=https://featherless.ai/>FeatherlessAI</a></li></ul><br>
</p>
<p>Model was created by Kearm, Auri and Cahvay.</p>
<h4>Special thanks:</h4><ul>
<li><b>to Cahvay for his work on investigating and reprocessing the corrupted dataset, removing the single biggest source of data poisoning.</b></li>
<li><b>to <a href=https://featherless.ai/>FeatherlessAI</a> for generously providing 8xH100 SXM node for training of this model</b></li>
<li>to Gryphe, Lemmy, Kalomaze, Nopm, Epiculous and CognitiveComputations for the data</li>
<li>and to Allura-org for support, feedback, beta-testing and doing quality control of EVA models.</li></ul>
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
base_model: Qwen/Qwen2.5-32B
load_in_8bit: false
load_in_4bit: false
strict: false
plugins:
- axolotl.integrations.liger.LigerPlugin
liger_rope: true
liger_rms_norm: true
liger_swiglu: true
liger_fused_linear_cross_entropy: true
# plugins:
# - axolotl.integrations.spectrum.SpectrumPlugin
# spectrum_top_fraction: 0.5
# # Optional if using a pre-scanned model as your base_model. Useful if using a model mirror
# spectrum_model_name: Qwen/Qwen2.5-32B
datasets:
- path: datasets/Celeste_Filtered_utf8fix.jsonl
type: sharegpt
- path: datasets/deduped_not_samantha_norefusals.jsonl
type: sharegpt
- path: datasets/deduped_SynthRP-Gens_processed_ShareGPT_converted_cleaned.jsonl
type: sharegpt
- path: datasets/deduped_Synthstruct-Gens_processed_sharegpt_converted_cleaned.jsonl
type: sharegpt
- path: datasets/Gryphe-4o-WP-filtered-sharegpt_utf8fix.jsonl
type: sharegpt
- path: datasets/opus-instruct-22k-no_refusals-filtered_utf8fix.jsonl
type: sharegpt
- path: datasets/Sonnet3-5-charcard-names-filtered-sharegpt_utf8fix.jsonl
type: sharegpt
- path: datasets/SystemChat_subset_filtered_sharegpt_utf8fix.jsonl
type: sharegpt
chat_template: chatml
shuffle_merged_datasets: true
val_set_size: 0.001
output_dir: ./EVA-Qwen2.5-32B-SFFT-v0.1
sequence_len: 10240
sample_packing: true
eval_sample_packing: false
pad_to_sequence_len: true
# adapter: qlora
# lora_model_dir:
# lora_r: 64
# lora_alpha: 128
# lora_dropout: 0.05
# lora_target_linear: true
# peft_use_dora: true
unfrozen_parameters:
- ^lm_head.weight$
- ^model.embed_tokens.weight$
# mlp.down_proj layers
- model.layers.63.mlp.down_proj
- model.layers.49.mlp.down_proj
- model.layers.48.mlp.down_proj
- model.layers.45.mlp.down_proj
- model.layers.44.mlp.down_proj
- model.layers.47.mlp.down_proj
- model.layers.46.mlp.down_proj
- model.layers.43.mlp.down_proj
- model.layers.8.mlp.down_proj
- model.layers.11.mlp.down_proj
- model.layers.19.mlp.down_proj
- model.layers.35.mlp.down_proj
- model.layers.20.mlp.down_proj
- model.layers.52.mlp.down_proj
- model.layers.39.mlp.down_proj
- model.layers.62.mlp.down_proj
- model.layers.50.mlp.down_proj
- model.layers.29.mlp.down_proj
- model.layers.16.mlp.down_proj
- model.layers.28.mlp.down_proj
- model.layers.53.mlp.down_proj
- model.layers.30.mlp.down_proj
- model.layers.31.mlp.down_proj
- model.layers.32.mlp.down_proj
- model.layers.7.mlp.down_proj
- model.layers.36.mlp.down_proj
- model.layers.12.mlp.down_proj
- model.layers.18.mlp.down_proj
- model.layers.37.mlp.down_proj
- model.layers.38.mlp.down_proj
- model.layers.14.mlp.down_proj
- model.layers.13.mlp.down_proj
# mlp.gate_proj layers
- model.layers.43.mlp.gate_proj
- model.layers.61.mlp.gate_proj
- model.layers.60.mlp.gate_proj
- model.layers.44.mlp.gate_proj
- model.layers.62.mlp.gate_proj
- model.layers.28.mlp.gate_proj
- model.layers.29.mlp.gate_proj
- model.layers.45.mlp.gate_proj
- model.layers.37.mlp.gate_proj
- model.layers.35.mlp.gate_proj
- model.layers.59.mlp.gate_proj
- model.layers.36.mlp.gate_proj
- model.layers.30.mlp.gate_proj
- model.layers.48.mlp.gate_proj
- model.layers.38.mlp.gate_proj
- model.layers.27.mlp.gate_proj
- model.layers.31.mlp.gate_proj
- model.layers.34.mlp.gate_proj
- model.layers.58.mlp.gate_proj
- model.layers.33.mlp.gate_proj
- model.layers.39.mlp.gate_proj
- model.layers.26.mlp.gate_proj
- model.layers.32.mlp.gate_proj
- model.layers.46.mlp.gate_proj
- model.layers.42.mlp.gate_proj
- model.layers.49.mlp.gate_proj
- model.layers.57.mlp.gate_proj
- model.layers.50.mlp.gate_proj
- model.layers.47.mlp.gate_proj
- model.layers.56.mlp.gate_proj
- model.layers.63.mlp.gate_proj
- model.layers.55.mlp.gate_proj
# mlp.up_proj layers
- model.layers.61.mlp.up_proj
- model.layers.60.mlp.up_proj
- model.layers.32.mlp.up_proj
- model.layers.59.mlp.up_proj
- model.layers.58.mlp.up_proj
- model.layers.57.mlp.up_proj
- model.layers.44.mlp.up_proj
- model.layers.28.mlp.up_proj
- model.layers.35.mlp.up_proj
- model.layers.36.mlp.up_proj
- model.layers.29.mlp.up_proj
- model.layers.31.mlp.up_proj
- model.layers.34.mlp.up_proj
- model.layers.55.mlp.up_proj
- model.layers.49.mlp.up_proj
- model.layers.30.mlp.up_proj
- model.layers.53.mlp.up_proj
- model.layers.43.mlp.up_proj
- model.layers.56.mlp.up_proj
- model.layers.33.mlp.up_proj
- model.layers.54.mlp.up_proj
- model.layers.62.mlp.up_proj
- model.layers.27.mlp.up_proj
- model.layers.51.mlp.up_proj
- model.layers.52.mlp.up_proj
- model.layers.37.mlp.up_proj
- model.layers.45.mlp.up_proj
- model.layers.26.mlp.up_proj
- model.layers.42.mlp.up_proj
- model.layers.50.mlp.up_proj
- model.layers.48.mlp.up_proj
- model.layers.39.mlp.up_proj
# self_attn.k_proj layers
- model.layers.63.self_attn.k_proj
- model.layers.55.self_attn.k_proj
- model.layers.60.self_attn.k_proj
- model.layers.7.self_attn.k_proj
- model.layers.12.self_attn.k_proj
- model.layers.13.self_attn.k_proj
- model.layers.57.self_attn.k_proj
- model.layers.29.self_attn.k_proj
- model.layers.14.self_attn.k_proj
- model.layers.51.self_attn.k_proj
- model.layers.53.self_attn.k_proj
- model.layers.54.self_attn.k_proj
- model.layers.22.self_attn.k_proj
- model.layers.61.self_attn.k_proj
- model.layers.18.self_attn.k_proj
- model.layers.30.self_attn.k_proj
- model.layers.9.self_attn.k_proj
- model.layers.24.self_attn.k_proj
- model.layers.23.self_attn.k_proj
- model.layers.25.self_attn.k_proj
- model.layers.10.self_attn.k_proj
- model.layers.58.self_attn.k_proj
- model.layers.56.self_attn.k_proj
- model.layers.15.self_attn.k_proj
- model.layers.32.self_attn.k_proj
- model.layers.28.self_attn.k_proj
- model.layers.8.self_attn.k_proj
- model.layers.59.self_attn.k_proj
- model.layers.11.self_attn.k_proj
- model.layers.48.self_attn.k_proj
- model.layers.16.self_attn.k_proj
- model.layers.50.self_attn.k_proj
# self_attn.o_proj layers
- model.layers.15.self_attn.o_proj
- model.layers.23.self_attn.o_proj
- model.layers.31.self_attn.o_proj
- model.layers.30.self_attn.o_proj
- model.layers.18.self_attn.o_proj
- model.layers.24.self_attn.o_proj
- model.layers.17.self_attn.o_proj
- model.layers.28.self_attn.o_proj
- model.layers.34.self_attn.o_proj
- model.layers.33.self_attn.o_proj
- model.layers.25.self_attn.o_proj
- model.layers.12.self_attn.o_proj
- model.layers.14.self_attn.o_proj
- model.layers.29.self_attn.o_proj
- model.layers.16.self_attn.o_proj
- model.layers.26.self_attn.o_proj
- model.layers.22.self_attn.o_proj
- model.layers.27.self_attn.o_proj
- model.layers.35.self_attn.o_proj
- model.layers.20.self_attn.o_proj
- model.layers.13.self_attn.o_proj
- model.layers.36.self_attn.o_proj
- model.layers.19.self_attn.o_proj
- model.layers.37.self_attn.o_proj
- model.layers.21.self_attn.o_proj
- model.layers.11.self_attn.o_proj
- model.layers.54.self_attn.o_proj
- model.layers.5.self_attn.o_proj
- model.layers.38.self_attn.o_proj
- model.layers.6.self_attn.o_proj
- model.layers.8.self_attn.o_proj
- model.layers.9.self_attn.o_proj
# self_attn.q_proj layers
- model.layers.1.self_attn.q_proj
- model.layers.2.self_attn.q_proj
- model.layers.3.self_attn.q_proj
- model.layers.45.self_attn.q_proj
- model.layers.54.self_attn.q_proj
- model.layers.35.self_attn.q_proj
- model.layers.48.self_attn.q_proj
- model.layers.61.self_attn.q_proj
- model.layers.52.self_attn.q_proj
- model.layers.50.self_attn.q_proj
- model.layers.60.self_attn.q_proj
- model.layers.56.self_attn.q_proj
- model.layers.58.self_attn.q_proj
- model.layers.42.self_attn.q_proj
- model.layers.59.self_attn.q_proj
- model.layers.44.self_attn.q_proj
- model.layers.55.self_attn.q_proj
- model.layers.57.self_attn.q_proj
- model.layers.41.self_attn.q_proj
- model.layers.36.self_attn.q_proj
- model.layers.39.self_attn.q_proj
- model.layers.4.self_attn.q_proj
- model.layers.43.self_attn.q_proj
- model.layers.34.self_attn.q_proj
- model.layers.46.self_attn.q_proj
- model.layers.49.self_attn.q_proj
- model.layers.40.self_attn.q_proj
- model.layers.25.self_attn.q_proj
- model.layers.51.self_attn.q_proj
- model.layers.17.self_attn.q_proj
- model.layers.37.self_attn.q_proj
- model.layers.53.self_attn.q_proj
# self_attn.v_proj layers
- model.layers.55.self_attn.v_proj
- model.layers.31.self_attn.v_proj
- model.layers.47.self_attn.v_proj
- model.layers.45.self_attn.v_proj
- model.layers.49.self_attn.v_proj
- model.layers.48.self_attn.v_proj
- model.layers.15.self_attn.v_proj
- model.layers.30.self_attn.v_proj
- model.layers.7.self_attn.v_proj
- model.layers.44.self_attn.v_proj
- model.layers.29.self_attn.v_proj
- model.layers.51.self_attn.v_proj
- model.layers.50.self_attn.v_proj
- model.layers.14.self_attn.v_proj
- model.layers.54.self_attn.v_proj
- model.layers.32.self_attn.v_proj
- model.layers.43.self_attn.v_proj
- model.layers.10.self_attn.v_proj
- model.layers.46.self_attn.v_proj
- model.layers.38.self_attn.v_proj
- model.layers.57.self_attn.v_proj
- model.layers.22.self_attn.v_proj
- model.layers.39.self_attn.v_proj
- model.layers.6.self_attn.v_proj
- model.layers.23.self_attn.v_proj
- model.layers.58.self_attn.v_proj
- model.layers.53.self_attn.v_proj
- model.layers.40.self_attn.v_proj
- model.layers.24.self_attn.v_proj
- model.layers.9.self_attn.v_proj
- model.layers.25.self_attn.v_proj
- model.layers.5.self_attn.v_proj
wandb_project: EVA-Qwen2.5-32B-SFFT-v0.2
wandb_entity:
wandb_watch:
wandb_name: Unit-02
wandb_log_model:
gradient_accumulation_steps: 8
micro_batch_size: 1
num_epochs: 3
optimizer: paged_adamw_8bit
lr_scheduler: cosine
learning_rate: 0.00005
max_grad_norm: 3
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: "unsloth"
# gradient_checkpointing_kwargs:
# use_reentrant: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 20
evals_per_epoch: 4
saves_per_epoch: 4
save_safetensors: true
hub_model_id:
hub_strategy:
debug:
deepspeed: deepspeed_configs/zero3_bf16.json
weight_decay: 0.1
# fsdp:
# - full_shard
# - auto_wrap
# fsdp_config:
# fsdp_limit_all_gathers: true
# fsdp_sync_module_states: false
# fsdp_offload_params: true
# fsdp_cpu_ram_efficient_loading: true
# fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
# fsdp_transformer_layer_cls_to_wrap: Qwen2DecoderLayer
# fsdp_activation_checkpointing: true
# fsdp_state_dict_type: SHARDED_STATE_DICT # Changed from FULL_STATE_DICT
# fsdp_sharding_strategy: FULL_SHARD
# fsdp_forward_prefetch: false # Added
# fsdp_backward_prefetch: "BACKWARD_PRE" # Added
# fsdp_backward_prefetch_limit: 1 # Added
# fsdp_mixed_precision: BF16 # Added
```
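
The `unfrozen_parameters` list above mixes anchored regexes (`^lm_head.weight$`) with plain layer paths (`model.layers.63.mlp.down_proj`). A minimal sketch of how such a list can gate which parameters stay trainable — the parameter names below are illustrative, and this is not axolotl's actual implementation:

```python
import re

# Patterns in the style of the unfrozen_parameters list above.
unfrozen_patterns = [
    r"^lm_head.weight$",
    r"^model.embed_tokens.weight$",
    "model.layers.63.mlp.down_proj",
]

def is_unfrozen(name: str, patterns) -> bool:
    """A parameter stays trainable if any pattern matches its name."""
    return any(re.search(p, name) for p in patterns)

# Hypothetical parameter names standing in for a real state dict.
param_names = [
    "lm_head.weight",
    "model.embed_tokens.weight",
    "model.layers.63.mlp.down_proj.weight",
    "model.layers.0.mlp.down_proj.weight",
]

trainable = [n for n in param_names if is_unfrozen(n, unfrozen_patterns)]
frozen = [n for n in param_names if not is_unfrozen(n, unfrozen_patterns)]
# Only layer 0's down_proj fails every pattern and stays frozen.
```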
</details><br> |
filtrado-video-prohibido-18/18.alana.video.alana.foto.viral.alana.flores.foto.viral.alana.flores.telegram | filtrado-video-prohibido-18 | 2025-05-28T15:58:29Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-05-28T15:58:20Z |
|
PepitaxX/qwen3-0.6B-openQA_finetune_mmlu_lora64_b_interrupted | PepitaxX | 2025-05-28T15:58:00Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-28T15:57:53Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Diamantis99/0trkTuM | Diamantis99 | 2025-05-28T15:56:57Z | 0 | 0 | segmentation-models-pytorch | [
"segmentation-models-pytorch",
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"semantic-segmentation",
"pytorch",
"image-segmentation",
"license:mit",
"region:us"
]
| image-segmentation | 2025-05-28T15:56:40Z | ---
library_name: segmentation-models-pytorch
license: mit
pipeline_tag: image-segmentation
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
- segmentation-models-pytorch
- semantic-segmentation
- pytorch
languages:
- python
---
# DeepLabV3Plus Model Card
Table of Contents:
- [Load trained model](#load-trained-model)
- [Model init parameters](#model-init-parameters)
- [Model metrics](#model-metrics)
- [Dataset](#dataset)
## Load trained model
```python
import segmentation_models_pytorch as smp
model = smp.from_pretrained("<save-directory-or-this-repo>")
```
## Model init parameters
```python
model_init_params = {
"encoder_name": "mit_b5",
"encoder_depth": 5,
"encoder_weights": "imagenet",
"encoder_output_stride": 16,
"decoder_channels": 256,
"decoder_atrous_rates": (12, 24, 36),
"decoder_aspp_separable": True,
"decoder_aspp_dropout": 0.5,
"in_channels": 3,
"classes": 1,
"activation": None,
"upsampling": 4,
"aux_params": None
}
```
## Model metrics
```json
[
{
"test_per_image_iou": 0.8510048389434814,
"test_dataset_iou": 0.8885248303413391
}
]
```
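
The two metrics above aggregate differently: `test_per_image_iou` averages each image's IoU, while `test_dataset_iou` pools intersection and union counts over the whole test set before dividing, so large or easy images weigh more in the latter. A short stdlib sketch with hypothetical binary masks:

```python
# Hypothetical flattened binary masks per image: (prediction, ground_truth).
images = [
    ([1, 1, 0, 0], [1, 0, 0, 0]),  # intersection 1, union 2 -> IoU 0.5
    ([1, 1, 1, 1], [1, 1, 1, 1]),  # intersection 4, union 4 -> IoU 1.0
]

ious = []
total_inter = total_union = 0
for pred, gt in images:
    inter = sum(p & g for p, g in zip(pred, gt))
    union = sum(p | g for p, g in zip(pred, gt))
    ious.append(inter / union)
    total_inter += inter
    total_union += union

per_image_iou = sum(ious) / len(ious)    # (0.5 + 1.0) / 2 = 0.75
dataset_iou = total_inter / total_union  # 5 / 6 ≈ 0.833
```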
## Dataset
Dataset name: VisionPipe
## More Information
- Library: https://github.com/qubvel/segmentation_models.pytorch
- Docs: https://smp.readthedocs.io/en/latest/
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) |
TOMFORD79/X2H9 | TOMFORD79 | 2025-05-28T15:56:18Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-28T15:48:50Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
TOMFORD79/X2H8 | TOMFORD79 | 2025-05-28T15:55:59Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-28T15:48:44Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mmmanuel/DPO_tulu3_stupid | mmmanuel | 2025-05-28T15:54:21Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"unsloth",
"trl",
"dpo",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-28T14:58:32Z | ---
library_name: transformers
tags:
- unsloth
- trl
- dpo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
DeusImperator/Legion-V2.1-LLaMa-70B_exl3_3.0bpw_H6 | DeusImperator | 2025-05-28T15:53:33Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2311.03099",
"base_model:Tarek07/Legion-V2.1-LLaMa-70B",
"base_model:quantized:Tarek07/Legion-V2.1-LLaMa-70B",
"license:llama3.3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"3-bit",
"exl3",
"region:us"
]
| text-generation | 2025-05-23T18:12:10Z | ---
base_model: Tarek07/Legion-V2.1-LLaMa-70B
library_name: transformers
tags:
- mergekit
- merge
license: llama3.3
---
# Legion-V2.1-LLaMa-70B - EXL3 3.0bpw H6
This is a 3bpw EXL3 quant of [Tarek07/Legion-V2.1-LLaMa-70B](https://huggingface.co/Tarek07/Legion-V2.1-LLaMa-70B)
This quant was made using exllamav3-0.0.2
3bpw fits in 32GB VRAM on Windows with around 18-20k Q8 context (tested in tabbyAPI)
I briefly tested this quant in some random RPs (including ones over 8k and 16k context) and it seems to work fine
## Prompt Templates
Uses Llama 3 Instruct format.
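For reference, a minimal sketch of assembling a single-turn Llama 3 Instruct prompt in Python (the special tokens follow the published Llama 3 chat format; skip this if your backend applies the chat template for you):

```python
def build_llama3_prompt(system: str, user: str) -> str:
    """Assemble a single-turn prompt in the Llama 3 Instruct format."""
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_llama3_prompt("You are a helpful narrator.", "Describe the tavern.")
print(prompt)
```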
### Original readme below
---
~ We are Legion...

My biggest merge yet, consisting of a total of 20 specially curated models. My methodology in approaching this was to create 5 highly specialized models:
- A completely uncensored base
- A very intelligent model based on UGI, Willingness and NatInt scores on the UGI Leaderboard
- A highly descriptive writing model, specializing in creative and natural prose
- A RP model specially merged with fine-tuned models that use a lot of RP datasets
- The secret ingredient: A completely unhinged, uncensored final model
These five models went through a series of iterations until I got something I thought worked well and then combined them to make LEGION.
The full list of models used in this merge is below:
- TheDrummer/Fallen-Llama-3.3-R1-70B-v1
- Sao10K/Llama-3.3-70B-Vulpecula-r1
- Sao10K/L3-70B-Euryale-v2.1
- SicariusSicariiStuff/Negative_LLAMA_70B
- allura-org/Bigger-Body-70b
- Sao10K/70B-L3.3-mhnnn-x1
- Sao10K/L3.3-70B-Euryale-v2.3
- Doctor-Shotgun/L3.3-70B-Magnum-v4-SE
- Sao10K/L3.1-70B-Hanami-x1
- Sao10K/70B-L3.3-Cirrus-x1
- EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1
- TheDrummer/Anubis-70B-v1
- ArliAI/Llama-3.3-70B-ArliAI-RPMax-v1.4
- LatitudeGames/Wayfarer-Large-70B-Llama-3.3
- NeverSleep/Lumimaid-v0.2-70B
- mlabonne/Hermes-3-Llama-3.1-70B-lorablated
- ReadyArt/Forgotten-Safeword-70B-3.6
- ReadyArt/Fallen-Abomination-70B-R1-v4.1
- ReadyArt/Fallen-Safeword-70B-R1-v4.1
- huihui-ai/Llama-3.3-70B-Instruct-abliterated
Recommended settings:
```
Temp 1.0
Min P 0.02
```
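The Min P setting above filters the sampling distribution relative to the most likely token; a rough pure-Python sketch of the idea (illustrative only, not the actual sampler code of any particular backend):

```python
def min_p_filter(probs, min_p=0.02):
    """Keep tokens whose probability is at least min_p times the top
    probability, then renormalize. probs maps token -> probability."""
    threshold = min_p * max(probs.values())
    kept = {tok: p for tok, p in probs.items() if p >= threshold}
    total = sum(kept.values())
    return {tok: p / total for tok, p in kept.items()}

probs = {"the": 0.5, "a": 0.3, "xyzzy": 0.005}
filtered = min_p_filter(probs, min_p=0.02)
print(filtered)  # "xyzzy" is dropped: 0.005 < 0.02 * 0.5
```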
Because of the nature of this sort of 'Hyper Multi Model Merge', my recommendation is not to run this on anything lower than a Q5 quant.
If you enjoy my work, please consider supporting me, It helps me make more models like this!
[Support on KO-FI <3](https://ko-fi.com/tarek07)
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE TIES](https://arxiv.org/abs/2311.03099) merge method using [TareksLab/L-BASE-V1](https://huggingface.co/TareksLab/L-BASE-V1) as a base.
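As a rough illustration of the DARE step — randomly dropping a task vector's entries and rescaling the survivors so the expected delta is unchanged — here is a toy sketch (not the mergekit implementation):

```python
import random

def dare_sparsify(delta, density, seed=0):
    """DARE: keep roughly a `density` fraction of a task vector's entries
    and rescale the survivors by 1/density, preserving the expected delta."""
    rng = random.Random(seed)
    return [d / density if rng.random() < density else 0.0 for d in delta]

delta = [0.2, -0.1, 0.05, 0.4]
sparse = dare_sparsify(delta, density=0.5)
print(sparse)
```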
### Models Merged
The following models were included in the merge:
* [TareksLab/L2-MERGE4](https://huggingface.co/TareksLab/L2-MERGE4)
* [TareksLab/L2-MERGE1](https://huggingface.co/TareksLab/L2-MERGE1)
* [TareksLab/L2-MERGE3](https://huggingface.co/TareksLab/L2-MERGE3)
* [TareksLab/L2-MERGE2a](https://huggingface.co/TareksLab/L2-MERGE2a)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: TareksLab/L2-MERGE2a
parameters:
weight: 0.20
density: 0.5
- model: TareksLab/L2-MERGE4
parameters:
weight: 0.20
density: 0.5
- model: TareksLab/L-BASE-V1
parameters:
weight: 0.20
density: 0.5
- model: TareksLab/L2-MERGE3
parameters:
weight: 0.20
density: 0.5
- model: TareksLab/L2-MERGE1
parameters:
weight: 0.20
density: 0.5
merge_method: dare_ties
base_model: TareksLab/L-BASE-V1
parameters:
normalize: false
out_dtype: bfloat16
chat_template: llama3
tokenizer:
source: base
```
|
postgreser/llama3.2-3B-oig-unsloth | postgreser | 2025-05-28T15:51:45Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-28T15:51:36Z | ---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** postgreser
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Diamantis99/dWw511n | Diamantis99 | 2025-05-28T15:47:08Z | 0 | 0 | segmentation-models-pytorch | [
"segmentation-models-pytorch",
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"semantic-segmentation",
"pytorch",
"image-segmentation",
"license:mit",
"region:us"
]
| image-segmentation | 2025-05-28T15:47:05Z | ---
library_name: segmentation-models-pytorch
license: mit
pipeline_tag: image-segmentation
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
- segmentation-models-pytorch
- semantic-segmentation
- pytorch
languages:
- python
---
# DeepLabV3Plus Model Card
Table of Contents:
- [Load trained model](#load-trained-model)
- [Model init parameters](#model-init-parameters)
- [Model metrics](#model-metrics)
- [Dataset](#dataset)
## Load trained model
```python
import segmentation_models_pytorch as smp
model = smp.from_pretrained("<save-directory-or-this-repo>")
```
## Model init parameters
```python
model_init_params = {
"encoder_name": "timm-tf_efficientnet_lite4",
"encoder_depth": 5,
"encoder_weights": "imagenet",
"encoder_output_stride": 16,
"decoder_channels": 256,
"decoder_atrous_rates": (12, 24, 36),
"decoder_aspp_separable": True,
"decoder_aspp_dropout": 0.5,
"in_channels": 3,
"classes": 1,
"activation": None,
"upsampling": 4,
"aux_params": None
}
```
## Model metrics
```json
[
{
"test_per_image_iou": 0.7642907500267029,
"test_dataset_iou": 0.8117832541465759
}
]
```
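For reference, the IoU metric reported above is the intersection over union of predicted and ground-truth masks; a minimal pure-Python sketch for flat binary masks:

```python
def binary_iou(pred, target):
    """IoU between two flat binary masks given as 0/1 sequences."""
    intersection = sum(p & t for p, t in zip(pred, target))
    union = sum(p | t for p, t in zip(pred, target))
    return intersection / union if union else 1.0

pred   = [1, 1, 0, 0, 1, 0]
target = [1, 0, 0, 0, 1, 1]
print(binary_iou(pred, target))  # 0.5
```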
## Dataset
Dataset name: VisionPipe
## More Information
- Library: https://github.com/qubvel/segmentation_models.pytorch
- Docs: https://smp.readthedocs.io/en/latest/
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) |
NEW-EXCLUSIVE-TRENDING-VIDEO-LINK/Original.Full.Clip.Katrina.Lim.Viral.Video.Leaks.Official | NEW-EXCLUSIVE-TRENDING-VIDEO-LINK | 2025-05-28T15:46:27Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-05-28T15:46:09Z |
<a href="https://sdu.sk/uLf"><img src="https://i.ibb.co.com/xMMVF88/686577567.gif" alt="fsd" /></a>
<a href="https://sdu.sk/uLf" rel="nofollow">►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► (𝗦𝗶𝗴𝗻 𝗨𝗽 𝘁𝗼 𝙁𝙪𝙡𝙡 𝗪𝗮𝘁𝗰𝗵 𝙑𝙞𝙙𝙚𝙤❤️❤️)</a>
<a href="https://sdu.sk/uLf" rel="nofollow">🔴 ➤►✅𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐥𝐢𝐧𝐤)</a>
|
New-tutorial-Jobz-Hunting-Viral-Videos/Full.Video.Jobz.Hunting.Sajal.Malikr.Viral.Video.Leaked.Official | New-tutorial-Jobz-Hunting-Viral-Videos | 2025-05-28T15:46:25Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-05-28T15:46:09Z | [🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐋𝐢𝐧𝐤 )](https://videohere.top/?viral)
[►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► 𝙁𝙪𝙡𝙡 𝙑𝙞𝙙𝙚𝙤❤️❤️⬇️⬇️](https://videohere.top/?viral)
[<img alt="fsd" src="http://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?viral) |
Diamantis99/rMNaoLF | Diamantis99 | 2025-05-28T15:43:22Z | 0 | 0 | segmentation-models-pytorch | [
"segmentation-models-pytorch",
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"semantic-segmentation",
"pytorch",
"image-segmentation",
"license:mit",
"region:us"
]
| image-segmentation | 2025-05-28T15:43:04Z | ---
library_name: segmentation-models-pytorch
license: mit
pipeline_tag: image-segmentation
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
- segmentation-models-pytorch
- semantic-segmentation
- pytorch
languages:
- python
---
# DeepLabV3Plus Model Card
Table of Contents:
- [Load trained model](#load-trained-model)
- [Model init parameters](#model-init-parameters)
- [Model metrics](#model-metrics)
- [Dataset](#dataset)
## Load trained model
```python
import segmentation_models_pytorch as smp
model = smp.from_pretrained("<save-directory-or-this-repo>")
```
## Model init parameters
```python
model_init_params = {
"encoder_name": "timm-efficientnet-b8",
"encoder_depth": 5,
"encoder_weights": "imagenet",
"encoder_output_stride": 16,
"decoder_channels": 256,
"decoder_atrous_rates": (12, 24, 36),
"decoder_aspp_separable": True,
"decoder_aspp_dropout": 0.5,
"in_channels": 3,
"classes": 1,
"activation": None,
"upsampling": 4,
"aux_params": None
}
```
## Model metrics
```json
[
{
"test_per_image_iou": 0.7939274907112122,
"test_dataset_iou": 0.8427851796150208
}
]
```
## Dataset
Dataset name: VisionPipe
## More Information
- Library: https://github.com/qubvel/segmentation_models.pytorch
- Docs: https://smp.readthedocs.io/en/latest/
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) |
sparse-encoder-testing/SparseEncodder_format_opensearch-neural-sparse-encoding-doc-v2-distill | sparse-encoder-testing | 2025-05-28T15:42:14Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"distilbert",
"fill-mask",
"learned sparse",
"opensearch",
"retrieval",
"passage-retrieval",
"document-expansion",
"bag-of-words",
"en",
"arxiv:2411.04403",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2025-05-28T15:42:00Z | ---
language: en
license: apache-2.0
tags:
- learned sparse
- opensearch
- transformers
- retrieval
- passage-retrieval
- document-expansion
- bag-of-words
---
# opensearch-neural-sparse-encoding-doc-v2-distill
## Select the model
The model should be selected by weighing search relevance, model inference cost, and retrieval efficiency (FLOPS). We benchmark the models' **zero-shot performance** on a subset of the BEIR benchmark: TrecCovid, NFCorpus, NQ, HotpotQA, FiQA, ArguAna, Touche, DBPedia, SCIDOCS, FEVER, Climate FEVER, SciFact, and Quora.
Overall, the v2 series of models have better search relevance, efficiency and inference speed than the v1 series. The specific advantages and disadvantages may vary across different datasets.
| Model | Inference-free for Retrieval | Model Parameters | AVG NDCG@10 | AVG FLOPS |
|-------|------------------------------|------------------|-------------|-----------|
| [opensearch-neural-sparse-encoding-v1](https://huggingface.co/opensearch-project/opensearch-neural-sparse-encoding-v1) | | 133M | 0.524 | 11.4 |
| [opensearch-neural-sparse-encoding-v2-distill](https://huggingface.co/opensearch-project/opensearch-neural-sparse-encoding-v2-distill) | | 67M | 0.528 | 8.3 |
| [opensearch-neural-sparse-encoding-doc-v1](https://huggingface.co/opensearch-project/opensearch-neural-sparse-encoding-doc-v1) | ✔️ | 133M | 0.490 | 2.3 |
| [opensearch-neural-sparse-encoding-doc-v2-distill](https://huggingface.co/opensearch-project/opensearch-neural-sparse-encoding-doc-v2-distill) | ✔️ | 67M | 0.504 | 1.8 |
| [opensearch-neural-sparse-encoding-doc-v2-mini](https://huggingface.co/opensearch-project/opensearch-neural-sparse-encoding-doc-v2-mini) | ✔️ | 23M | 0.497 | 1.7 |
## Overview
- **Paper**: [Towards Competitive Search Relevance For Inference-Free Learned Sparse Retrievers](https://arxiv.org/abs/2411.04403)
- **Fine-tuning sample**: [opensearch-sparse-model-tuning-sample](https://github.com/zhichao-aws/opensearch-sparse-model-tuning-sample)
This is a learned sparse retrieval model. It encodes documents into 30522-dimensional **sparse vectors**. For queries, it just uses a tokenizer and a weight look-up table to generate sparse vectors. Each non-zero dimension index corresponds to a token in the vocabulary, and the weight gives the importance of that token. The similarity score is the inner product of the query and document sparse vectors.
The training datasets include MS MARCO, eli5_question_answer, squad_pairs, WikiAnswers, yahoo_answers_title_question, gooaq_pairs, stackexchange_duplicate_questions_body_body, wikihow, S2ORC_title_abstract, stackexchange_duplicate_questions_title-body_title-body, yahoo_answers_question_answer, searchQA_top5_snippets, stackexchange_duplicate_questions_title_title, yahoo_answers_title_answer.
The OpenSearch neural sparse feature supports learned sparse retrieval with the Lucene inverted index. Link: https://opensearch.org/docs/latest/query-dsl/specialized/neural-sparse/. The indexing and search can be performed with the OpenSearch high-level API.
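As a hedged sketch of what such a search call can look like from Python — the field name, index name, and model id below are placeholders, and the `neural_sparse` clause shape should be checked against the OpenSearch docs linked above:

```python
# Build a neural-sparse query body; "passage_embedding", "my-index" and the
# model id are placeholders for your own deployment.
query_body = {
    "query": {
        "neural_sparse": {
            "passage_embedding": {
                "query_text": "What's the weather in ny now?",
                "model_id": "<your-deployed-model-id>",
            }
        }
    }
}

# With the opensearch-py client this would be sent roughly as:
#   client.search(index="my-index", body=query_body)
print(query_body["query"]["neural_sparse"]["passage_embedding"]["query_text"])
```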
## Usage (HuggingFace)
This model is supposed to run inside an OpenSearch cluster, but you can also use it outside the cluster with the Hugging Face models API.
```python
import json
import itertools
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer
# get sparse vector from dense vectors with shape batch_size * seq_len * vocab_size
def get_sparse_vector(feature, output):
values, _ = torch.max(output*feature["attention_mask"].unsqueeze(-1), dim=1)
values = torch.log(1 + torch.relu(values))
values[:,special_token_ids] = 0
return values
# transform the sparse vector to a dict of (token, weight)
def transform_sparse_vector_to_dict(sparse_vector):
sample_indices,token_indices=torch.nonzero(sparse_vector,as_tuple=True)
non_zero_values = sparse_vector[(sample_indices,token_indices)].tolist()
number_of_tokens_for_each_sample = torch.bincount(sample_indices).cpu().tolist()
tokens = [transform_sparse_vector_to_dict.id_to_token[_id] for _id in token_indices.tolist()]
output = []
end_idxs = list(itertools.accumulate([0]+number_of_tokens_for_each_sample))
for i in range(len(end_idxs)-1):
token_strings = tokens[end_idxs[i]:end_idxs[i+1]]
weights = non_zero_values[end_idxs[i]:end_idxs[i+1]]
output.append(dict(zip(token_strings, weights)))
return output
# download the idf file from model hub. idf is used to give weights for query tokens
def get_tokenizer_idf(tokenizer):
from huggingface_hub import hf_hub_download
local_cached_path = hf_hub_download(repo_id="opensearch-project/opensearch-neural-sparse-encoding-doc-v2-distill", filename="idf.json")
with open(local_cached_path) as f:
idf = json.load(f)
idf_vector = [0]*tokenizer.vocab_size
for token,weight in idf.items():
_id = tokenizer._convert_token_to_id_with_added_voc(token)
idf_vector[_id]=weight
return torch.tensor(idf_vector)
# load the model
model = AutoModelForMaskedLM.from_pretrained("opensearch-project/opensearch-neural-sparse-encoding-doc-v2-distill")
tokenizer = AutoTokenizer.from_pretrained("opensearch-project/opensearch-neural-sparse-encoding-doc-v2-distill")
idf = get_tokenizer_idf(tokenizer)
# set the special tokens and id_to_token transform for post-process
special_token_ids = [tokenizer.vocab[token] for token in tokenizer.special_tokens_map.values()]
get_sparse_vector.special_token_ids = special_token_ids
id_to_token = ["" for i in range(tokenizer.vocab_size)]
for token, _id in tokenizer.vocab.items():
id_to_token[_id] = token
transform_sparse_vector_to_dict.id_to_token = id_to_token
query = "What's the weather in ny now?"
document = "Currently New York is rainy."
# encode the query
feature_query = tokenizer([query], padding=True, truncation=True, return_tensors='pt')
input_ids = feature_query["input_ids"]
batch_size = input_ids.shape[0]
query_vector = torch.zeros(batch_size, tokenizer.vocab_size)
query_vector[torch.arange(batch_size).unsqueeze(-1), input_ids] = 1
query_sparse_vector = query_vector*idf
# encode the document
feature_document = tokenizer([document], padding=True, truncation=True, return_tensors='pt')
output = model(**feature_document)[0]
document_sparse_vector = get_sparse_vector(feature_document, output)
# get similarity score
sim_score = torch.matmul(query_sparse_vector[0],document_sparse_vector[0])
print(sim_score) # tensor(17.5307, grad_fn=<DotBackward0>)
query_token_weight = transform_sparse_vector_to_dict(query_sparse_vector)[0]
document_query_token_weight = transform_sparse_vector_to_dict(document_sparse_vector)[0]
for token in sorted(query_token_weight, key=lambda x:query_token_weight[x], reverse=True):
if token in document_query_token_weight:
print("score in query: %.4f, score in document: %.4f, token: %s"%(query_token_weight[token],document_query_token_weight[token],token))
# result:
# score in query: 5.7729, score in document: 1.4109, token: ny
# score in query: 4.5684, score in document: 1.4673, token: weather
# score in query: 3.5895, score in document: 0.7473, token: now
```
The above code sample shows an example of neural sparse search. Although there are no overlapping tokens in the original query and document, the model still produces a good match.
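Since both texts end up as token-to-weight dictionaries, the similarity can also be computed directly on that sparse representation. A small illustrative sketch — the query and shared-token weights below are the ones printed by the example above, while the extra `rainy` document weight is made up for illustration (it does not affect the score):

```python
def sparse_dot(query_weights, doc_weights):
    """Inner product of two sparse vectors given as token -> weight dicts."""
    return sum(w * doc_weights[tok]
               for tok, w in query_weights.items() if tok in doc_weights)

query_weights = {"ny": 5.7729, "weather": 4.5684, "now": 3.5895}
doc_weights = {"ny": 1.4109, "weather": 1.4673, "now": 0.7473, "rainy": 2.0}
score = sparse_dot(query_weights, doc_weights)
print(round(score, 4))  # 17.5306, matching the full example's score
```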
## Detailed Search Relevance
<div style="overflow-x: auto;">
| Model | Average | Trec Covid | NFCorpus | NQ | HotpotQA | FiQA | ArguAna | Touche | DBPedia | SCIDOCS | FEVER | Climate FEVER | SciFact | Quora |
|-------|---------|------------|----------|----|----------|------|---------|--------|---------|---------|-------|---------------|---------|-------|
| [opensearch-neural-sparse-encoding-v1](https://huggingface.co/opensearch-project/opensearch-neural-sparse-encoding-v1) | 0.524 | 0.771 | 0.360 | 0.553 | 0.697 | 0.376 | 0.508 | 0.278 | 0.447 | 0.164 | 0.821 | 0.263 | 0.723 | 0.856 |
| [opensearch-neural-sparse-encoding-v2-distill](https://huggingface.co/opensearch-project/opensearch-neural-sparse-encoding-v2-distill) | 0.528 | 0.775 | 0.347 | 0.561 | 0.685 | 0.374 | 0.551 | 0.278 | 0.435 | 0.173 | 0.849 | 0.249 | 0.722 | 0.863 |
| [opensearch-neural-sparse-encoding-doc-v1](https://huggingface.co/opensearch-project/opensearch-neural-sparse-encoding-doc-v1) | 0.490 | 0.707 | 0.352 | 0.521 | 0.677 | 0.344 | 0.461 | 0.294 | 0.412 | 0.154 | 0.743 | 0.202 | 0.716 | 0.788 |
| [opensearch-neural-sparse-encoding-doc-v2-distill](https://huggingface.co/opensearch-project/opensearch-neural-sparse-encoding-doc-v2-distill) | 0.504 | 0.690 | 0.343 | 0.528 | 0.675 | 0.357 | 0.496 | 0.287 | 0.418 | 0.166 | 0.818 | 0.224 | 0.715 | 0.841 |
| [opensearch-neural-sparse-encoding-doc-v2-mini](https://huggingface.co/opensearch-project/opensearch-neural-sparse-encoding-doc-v2-mini) | 0.497 | 0.709 | 0.336 | 0.510 | 0.666 | 0.338 | 0.480 | 0.285 | 0.407 | 0.164 | 0.812 | 0.216 | 0.699 | 0.837 |
</div>
## License
This project is licensed under the [Apache v2.0 License](https://github.com/opensearch-project/neural-search/blob/main/LICENSE).
## Copyright
Copyright OpenSearch Contributors. See [NOTICE](https://github.com/opensearch-project/neural-search/blob/main/NOTICE) for details. |
denise-mukendi-dusauchoy-hq/Full.Video.Complete.18.denise.dusauchoy.telegram.denise.mukendi.dusauchoy.video.mutakala | denise-mukendi-dusauchoy-hq | 2025-05-28T15:41:07Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-05-28T15:40:49Z | [🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐋𝐢𝐧𝐤 )](https://videohere.top/?viral)
[►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► 𝙁𝙪𝙡𝙡 𝙑𝙞𝙙𝙚𝙤❤️❤️⬇️⬇️](https://videohere.top/?viral)
[<img alt="fsd" src="http://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?viral) |
ChevellaShyam/model | ChevellaShyam | 2025-05-28T15:39:43Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-28T15:36:27Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
niedamsie/bigasptry2 | niedamsie | 2025-05-28T15:38:08Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2025-05-28T15:35:06Z | ---
license: apache-2.0
---
|
New-Viral-Arovi-Nusrat-Ridhi-Viral-Video/Original.Full.Clip.Arovi.Nusrat.Ridhi.Viral.Video.Leaks.Official | New-Viral-Arovi-Nusrat-Ridhi-Viral-Video | 2025-05-28T15:35:27Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-05-28T15:35:11Z | [🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐋𝐢𝐧𝐤 )](https://videohere.top/?viral)
[►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► 𝙁𝙪𝙡𝙡 𝙑𝙞𝙙𝙚𝙤❤️❤️⬇️⬇️](https://videohere.top/?viral)
[<img alt="fsd" src="http://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?viral) |
BootesVoid/cmb82puun0fv2lexp4lztskxx_cmb82wq8e0fyxlexp0oh7ku6q | BootesVoid | 2025-05-28T15:35:22Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
]
| text-to-image | 2025-05-28T15:35:21Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: KARAN
---
# Cmb82Puun0Fv2Lexp4Lztskxx_Cmb82Wq8E0Fyxlexp0Oh7Ku6Q
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `KARAN` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "KARAN",
"lora_weights": "https://huggingface.co/BootesVoid/cmb82puun0fv2lexp4lztskxx_cmb82wq8e0fyxlexp0oh7ku6q/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmb82puun0fv2lexp4lztskxx_cmb82wq8e0fyxlexp0oh7ku6q', weight_name='lora.safetensors')
image = pipeline('KARAN').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmb82puun0fv2lexp4lztskxx_cmb82wq8e0fyxlexp0oh7ku6q/discussions) to add images that show off what you’ve made with this LoRA.
|
one-girl-one-wolf-link-original/one.girl.one.wolf.viral.video | one-girl-one-wolf-link-original | 2025-05-28T15:34:42Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-05-28T15:34:34Z | [🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐋𝐢𝐧𝐤 )](https://videohere.top/?viral)
[►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► 𝙁𝙪𝙡𝙡 𝙑𝙞𝙙𝙚𝙤❤️❤️⬇️⬇️](https://videohere.top/?viral)
[<img alt="fsd" src="http://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?viral) |
BreadTheSire/XiYanSQL-QwenCoder-7B-2502-Q4_K_M-GGUF | BreadTheSire | 2025-05-28T15:30:30Z | 0 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:XGenerationLab/XiYanSQL-QwenCoder-7B-2502",
"base_model:quantized:XGenerationLab/XiYanSQL-QwenCoder-7B-2502",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-05-28T15:30:08Z | ---
license: apache-2.0
base_model: XGenerationLab/XiYanSQL-QwenCoder-7B-2502
tags:
- llama-cpp
- gguf-my-repo
---
# BreadTheSire/XiYanSQL-QwenCoder-7B-2502-Q4_K_M-GGUF
This model was converted to GGUF format from [`XGenerationLab/XiYanSQL-QwenCoder-7B-2502`](https://huggingface.co/XGenerationLab/XiYanSQL-QwenCoder-7B-2502) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/XGenerationLab/XiYanSQL-QwenCoder-7B-2502) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo BreadTheSire/XiYanSQL-QwenCoder-7B-2502-Q4_K_M-GGUF --hf-file xiyansql-qwencoder-7b-2502-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo BreadTheSire/XiYanSQL-QwenCoder-7B-2502-Q4_K_M-GGUF --hf-file xiyansql-qwencoder-7b-2502-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo BreadTheSire/XiYanSQL-QwenCoder-7B-2502-Q4_K_M-GGUF --hf-file xiyansql-qwencoder-7b-2502-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo BreadTheSire/XiYanSQL-QwenCoder-7B-2502-Q4_K_M-GGUF --hf-file xiyansql-qwencoder-7b-2502-q4_k_m.gguf -c 2048
```
|
HPLT/hplt2c_nno_checkpoints | HPLT | 2025-05-28T15:28:59Z | 0 | 0 | null | [
"pytorch",
"llama",
"HPLT",
"decoder",
"nn",
"dataset:HPLT/HPLT2.0_cleaned",
"arxiv:2503.10267",
"license:apache-2.0",
"region:us"
]
| null | 2025-05-26T08:49:52Z | ---
language:
- nn
tags:
- HPLT
- decoder
license: apache-2.0
datasets:
- HPLT/HPLT2.0_cleaned
---
# HPLT v2.0 - Cleaned - Norwegian Nynorsk
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
This is one of the decoder-only language models trained by the [HPLT project](https://hplt-project.org/) on [HPLT2.0_cleaned](https://huggingface.co/datasets/HPLT/HPLT2.0_cleaned), the cleaned variant of the HPLT v2.0 dataset.
All the HPLT decoder-only models use the same hyper-parameters, roughly following the llama architecture with 2.15B parameters in total:
- hidden size: 2048
- attention heads: 32
- layers: 24
- sequence length: 2048
## Intermediate checkpoints
We are releasing intermediate checkpoints for each model at intervals of 1000 training steps in separate branches. The naming convention is `checkpoint_00xxxxx`, i.e. the zero-padded training step: for example, `checkpoint_0005000`. The checkpoints range from `checkpoint_0001000` to `checkpoint_0047684`, and the latter is in the main branch.
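Since the branches follow a fixed step pattern, the full list of revision names can be generated programmatically — a small sketch (the zero-padding width is inferred from the branch names above):

```python
# Enumerate the intermediate checkpoint branch names: one every 1000
# training steps, plus the final step 47684 kept in the main branch.
def checkpoint_branches(final_step: int = 47684, interval: int = 1000) -> list[str]:
    steps = list(range(interval, final_step, interval)) + [final_step]
    return [f"checkpoint_{step:07d}" for step in steps]

branches = checkpoint_branches()
print(branches[0], branches[-1])  # checkpoint_0001000 checkpoint_0047684
```

Each name can then be passed as the `revision` argument of `from_pretrained` in 🤗 transformers to load that snapshot.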
## Cite us
```bibtex
@misc{burchell2025expandedmassivemultilingualdataset,
title={An Expanded Massive Multilingual Dataset for High-Performance Language Technologies},
author={Laurie Burchell and Ona de Gibert and Nikolay Arefyev and Mikko Aulamo and Marta Bañón and Pinzhen Chen and Mariia Fedorova and Liane Guillou and Barry Haddow and Jan Hajič and Jindřich Helcl and Erik Henriksson and Mateusz Klimaszewski and Ville Komulainen and Andrey Kutuzov and Joona Kytöniemi and Veronika Laippala and Petter Mæhlum and Bhavitvya Malik and Farrokh Mehryary and Vladislav Mikhailov and Nikita Moghe and Amanda Myntti and Dayyán O'Brien and Stephan Oepen and Proyag Pal and Jousia Piha and Sampo Pyysalo and Gema Ramírez-Sánchez and David Samuel and Pavel Stepachev and Jörg Tiedemann and Dušan Variš and Tereza Vojtěchová and Jaume Zaragoza-Bernabeu},
year={2025},
eprint={2503.10267},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2503.10267},
}
``` |
mradermacher/R1-Code-Interpreter-3B-SFT-GGUF | mradermacher | 2025-05-28T15:28:28Z | 169 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:yongchao98/R1-Code-Interpreter-3B",
"base_model:quantized:yongchao98/R1-Code-Interpreter-3B",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-05-04T20:18:49Z | ---
base_model: yongchao98/R1-Code-Interpreter-3B
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/yongchao98/R1-Code-Interpreter-3B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/R1-Code-Interpreter-3B-SFT-GGUF/resolve/main/R1-Code-Interpreter-3B-SFT.Q2_K.gguf) | Q2_K | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/R1-Code-Interpreter-3B-SFT-GGUF/resolve/main/R1-Code-Interpreter-3B-SFT.Q3_K_S.gguf) | Q3_K_S | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/R1-Code-Interpreter-3B-SFT-GGUF/resolve/main/R1-Code-Interpreter-3B-SFT.Q3_K_M.gguf) | Q3_K_M | 1.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/R1-Code-Interpreter-3B-SFT-GGUF/resolve/main/R1-Code-Interpreter-3B-SFT.Q3_K_L.gguf) | Q3_K_L | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/R1-Code-Interpreter-3B-SFT-GGUF/resolve/main/R1-Code-Interpreter-3B-SFT.IQ4_XS.gguf) | IQ4_XS | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/R1-Code-Interpreter-3B-SFT-GGUF/resolve/main/R1-Code-Interpreter-3B-SFT.Q4_K_S.gguf) | Q4_K_S | 1.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/R1-Code-Interpreter-3B-SFT-GGUF/resolve/main/R1-Code-Interpreter-3B-SFT.Q4_K_M.gguf) | Q4_K_M | 2.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/R1-Code-Interpreter-3B-SFT-GGUF/resolve/main/R1-Code-Interpreter-3B-SFT.Q5_K_S.gguf) | Q5_K_S | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/R1-Code-Interpreter-3B-SFT-GGUF/resolve/main/R1-Code-Interpreter-3B-SFT.Q5_K_M.gguf) | Q5_K_M | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/R1-Code-Interpreter-3B-SFT-GGUF/resolve/main/R1-Code-Interpreter-3B-SFT.Q6_K.gguf) | Q6_K | 2.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/R1-Code-Interpreter-3B-SFT-GGUF/resolve/main/R1-Code-Interpreter-3B-SFT.Q8_0.gguf) | Q8_0 | 3.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/R1-Code-Interpreter-3B-SFT-GGUF/resolve/main/R1-Code-Interpreter-3B-SFT.f16.gguf) | f16 | 6.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
CodeAtCMU/gemma-3-4b-pt_full_sft_code_data_120K | CodeAtCMU | 2025-05-28T15:26:40Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"conversational",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| image-text-to-text | 2025-05-28T15:22:48Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mlx-community/UnslopNemo-12B-v4.1-8bit | mlx-community | 2025-05-28T15:22:47Z | 0 | 0 | mlx | [
"mlx",
"safetensors",
"mistral",
"base_model:TheDrummer/UnslopNemo-12B-v4.1",
"base_model:quantized:TheDrummer/UnslopNemo-12B-v4.1",
"8-bit",
"region:us"
]
| null | 2025-05-28T15:21:53Z | ---
base_model: TheDrummer/UnslopNemo-12B-v4.1
tags:
- mlx
---
# mlx-community/UnslopNemo-12B-v4.1-8bit
The Model [mlx-community/UnslopNemo-12B-v4.1-8bit](https://huggingface.co/mlx-community/UnslopNemo-12B-v4.1-8bit) was converted to MLX format from [TheDrummer/UnslopNemo-12B-v4.1](https://huggingface.co/TheDrummer/UnslopNemo-12B-v4.1) using mlx-lm version **0.24.1**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/UnslopNemo-12B-v4.1-8bit")
prompt="hello"
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
CodeAtCMU/gemma-3-4b-pt_full_sft_natural_language_data_120K | CodeAtCMU | 2025-05-28T15:22:04Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"conversational",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| image-text-to-text | 2025-05-28T15:18:20Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
hongxianghugging/DeepSeek-R1-Distill-Llama-8B-FinQA-RL | hongxianghugging | 2025-05-28T15:21:41Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-27T03:52:40Z | ---
base_model: unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** hongxianghugging
- **License:** apache-2.0
- **Finetuned from model :** unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mlx-community/UnslopNemo-12B-v4.1-bf16 | mlx-community | 2025-05-28T15:20:50Z | 0 | 0 | mlx | [
"mlx",
"safetensors",
"mistral",
"base_model:TheDrummer/UnslopNemo-12B-v4.1",
"base_model:finetune:TheDrummer/UnslopNemo-12B-v4.1",
"region:us"
]
| null | 2025-05-28T15:19:37Z | ---
base_model: TheDrummer/UnslopNemo-12B-v4.1
tags:
- mlx
---
# mlx-community/UnslopNemo-12B-v4.1-bf16
The Model [mlx-community/UnslopNemo-12B-v4.1-bf16](https://huggingface.co/mlx-community/UnslopNemo-12B-v4.1-bf16) was converted to MLX format from [TheDrummer/UnslopNemo-12B-v4.1](https://huggingface.co/TheDrummer/UnslopNemo-12B-v4.1) using mlx-lm version **0.24.1**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/UnslopNemo-12B-v4.1-bf16")
prompt="hello"
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
Diamantis99/h1yRz8m | Diamantis99 | 2025-05-28T15:18:53Z | 0 | 0 | segmentation-models-pytorch | [
"segmentation-models-pytorch",
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"semantic-segmentation",
"pytorch",
"image-segmentation",
"license:mit",
"region:us"
]
| image-segmentation | 2025-05-28T15:18:50Z | ---
library_name: segmentation-models-pytorch
license: mit
pipeline_tag: image-segmentation
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
- segmentation-models-pytorch
- semantic-segmentation
- pytorch
languages:
- python
---
# DeepLabV3Plus Model Card
Table of Contents:
- [Load trained model](#load-trained-model)
- [Model init parameters](#model-init-parameters)
- [Model metrics](#model-metrics)
- [Dataset](#dataset)
## Load trained model
```python
import segmentation_models_pytorch as smp
model = smp.from_pretrained("<save-directory-or-this-repo>")
```
## Model init parameters
```python
model_init_params = {
"encoder_name": "mobilenet_v2",
"encoder_depth": 5,
"encoder_weights": "imagenet",
"encoder_output_stride": 16,
"decoder_channels": 256,
"decoder_atrous_rates": (12, 24, 36),
"decoder_aspp_separable": True,
"decoder_aspp_dropout": 0.5,
"in_channels": 3,
"classes": 1,
"activation": None,
"upsampling": 4,
"aux_params": None
}
```
## Model metrics
```json
[
{
"test_per_image_iou": 0.7858009338378906,
"test_dataset_iou": 0.8261166214942932
}
]
```
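The two numbers differ by how they aggregate: `test_per_image_iou` averages each image's IoU, while `test_dataset_iou` pools intersection and union counts over the whole test set before dividing. A toy illustration with flattened binary masks (pure Python, not the library's actual implementation):

```python
# Intersection/union counts for one flattened binary mask pair.
def iou_counts(pred, gt):
    inter = sum(1 for p, g in zip(pred, gt) if p and g)
    union = sum(1 for p, g in zip(pred, gt) if p or g)
    return inter, union

images = [([1, 1, 0, 0], [1, 0, 0, 0]),   # IoU = 1/2
          ([1, 1, 1, 1], [1, 1, 1, 0])]   # IoU = 3/4
counts = [iou_counts(p, g) for p, g in images]
per_image_iou = sum(i / u for i, u in counts) / len(counts)          # 0.625
dataset_iou = sum(i for i, _ in counts) / sum(u for _, u in counts)  # 4/6 ≈ 0.667
print(per_image_iou, dataset_iou)
```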
## Dataset
Dataset name: VisionPipe
## More Information
- Library: https://github.com/qubvel/segmentation_models.pytorch
- Docs: https://smp.readthedocs.io/en/latest/
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) |
lmcastanedame/Taxi-v3 | lmcastanedame | 2025-05-28T15:18:03Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2025-05-28T15:18:00Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.42 +/- 2.85
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="lmcastanedame/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
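Once loaded, acting with the agent just means taking the argmax over the Q-table row for the current state. A minimal sketch — the Q-table's (n_states, n_actions) layout is an assumption about the pickle contents, not guaranteed by this repo:

```python
# Greedy policy over a tabular Q-function: pick the action with the
# highest Q-value for the given state.
def greedy_action(qtable, state):
    row = qtable[state]
    return max(range(len(row)), key=lambda action: row[action])

# Toy 2-state, 3-action table standing in for the downloaded Q-table.
demo_q = [[0.1, 0.9, 0.0],
          [0.5, 0.2, 0.7]]
print(greedy_action(demo_q, 0))  # 1
```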
|
Emadb/q-FrozenLake-v1-4x4-noSlippery | Emadb | 2025-05-28T15:16:20Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2025-05-28T15:16:15Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="Emadb/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
rayonlabs/hf-autotrain-2025-05-28-15-1f41ff88 | rayonlabs | 2025-05-28T15:14:59Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"autotrain",
"text-generation-inference",
"peft",
"conversational",
"dataset:rayonlabs/autotrain-data-hf-autotrain-2025-05-28-15-1f41ff88",
"base_model:unsloth/Meta-Llama-3.1-8B",
"base_model:finetune:unsloth/Meta-Llama-3.1-8B",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-28T15:09:32Z | ---
tags:
- autotrain
- text-generation-inference
- text-generation
- peft
library_name: transformers
base_model: unsloth/Meta-Llama-3.1-8B
widget:
- messages:
- role: user
content: What is your favorite condiment?
license: other
datasets:
- rayonlabs/autotrain-data-hf-autotrain-2025-05-28-15-1f41ff88
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
``` |
BootesVoid/cmb81ie0w0fdklexpg9mjsh0b_cmb8272tq0fmflexppeud7i3o | BootesVoid | 2025-05-28T15:10:46Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
]
| text-to-image | 2025-05-28T15:10:44Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: sophie
---
# Cmb81Ie0W0Fdklexpg9Mjsh0B_Cmb8272Tq0Fmflexppeud7I3O
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `sophie` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "sophie",
"lora_weights": "https://huggingface.co/BootesVoid/cmb81ie0w0fdklexpg9mjsh0b_cmb8272tq0fmflexppeud7i3o/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmb81ie0w0fdklexpg9mjsh0b_cmb8272tq0fmflexppeud7i3o', weight_name='lora.safetensors')
image = pipeline('sophie').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmb81ie0w0fdklexpg9mjsh0b_cmb8272tq0fmflexppeud7i3o/discussions) to add images that show off what you’ve made with this LoRA.
|
asdc/XLM_Temporal_Expression_Normalization | asdc | 2025-05-28T15:09:00Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"fill-mask",
"generated_from_trainer",
"es",
"en",
"it",
"fr",
"eu",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2024-02-06T22:26:20Z | ---
tags:
- generated_from_trainer
model-index:
- name: XLM_temporal_expression_normalization
results: []
language:
- es
- en
- it
- fr
- eu
---
# XLM_normalization_BEST_MODEL
This model was fine-tuned from the XLM-RoBERTa Large model for temporal expression normalization, as presented in the paper "A Novel Methodology for Enhancing Cross-Language and Domain Adaptability in Temporal Expression Normalization".
## Model description
More information needed
## Intended uses & limitations
This model requires extra post-processing. The corresponding code can be found at "https://github.com/asdc-s5/Temporal-expression-normalization-with-fill-mask"
## Training and evaluation data
All the information about training, evaluation and benchmarking can be found in the paper "A Novel Methodology for Enhancing
Cross-Language and Domain Adaptability in Temporal Expression Normalization"
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-05
- train_batch_size: 20
- eval_batch_size: 20
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0 |
5SAGI/NIPS2025 | 5SAGI | 2025-05-28T15:08:23Z | 0 | 0 | null | [
"safetensors",
"license:apache-2.0",
"region:us"
]
| null | 2025-05-20T06:51:43Z | ---
license: apache-2.0
---
|
HosenM13/Hosen | HosenM13 | 2025-05-28T15:07:24Z | 0 | 0 | null | [
"license:bigscience-bloom-rail-1.0",
"region:us"
]
| null | 2025-05-28T15:07:24Z | ---
license: bigscience-bloom-rail-1.0
---
|
Master-thesis-NAP/nomicAI-ModernBERT-base-finetuned | Master-thesis-NAP | 2025-05-28T15:06:33Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"modernbert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:79876",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:nomic-ai/modernbert-embed-base",
"base_model:finetune:nomic-ai/modernbert-embed-base",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2025-05-28T15:05:51Z | ---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:79876
- loss:MultipleNegativesRankingLoss
base_model: nomic-ai/modernbert-embed-base
widget:
- source_sentence: What is the error estimate for the difference between the exact
solution and the local oscillation decomposition (LOD) solution in terms of the
$L_0$ norm?
sentences:
- '\label{thm1}
Suppose $\kappa$ and $\bar a$ are as above. Then $|\Pcut(\bar a)| \leq 2^\kappa$.
Indeed if
$2^\kappa=\aleph_\alpha,$ then $|\Pcut(\bar a)| \leq |\alpha+1|^2$.'
- "\\cite{kyushu}\n For every discrete group $\\G$ and every 2-dimensional representation\
\ $\\varrho$ of $\\G$, $\\varrho-$equivariant functions for $\\G$ always exist."
- "\\label{Corollary}\n Let Assumptions~\\ref{assum_1} and~\\ref{assump2} be\
\ satisfied. Let $u$ be the solution of~\\eqref{WeakForm} and let $u_{H,k}$ be\
\ the LOD solution of~\\eqref{local_probelm }. Then we have \n \\begin{equation}\\\
label{L2Estimate}\n \\|u-I_Hu_{H,k}\\|_0\\lesssim \\|u-I_Hu\\|_0+\\|u-u_{H,k}\\\
|_0 +H|u-u_{H,k}|_1.\n \\end{equation}\n %\\[\\|u-I_Hu_{H,k}\\|_0\\lesssim\
\ H |u|_1 +|u-u_{H,k}|_1.\\]"
- source_sentence: Does the theorem imply that the rate of convergence of the sequence
$T_{m,j}(E)$ to $T_{m+k_n,j+k_n}(E)$ is exponential in the distance between $m$
and $j$, and that this rate is bounded by a constant $C$ times an exponential
decay factor involving the parameter $\gamma$?
sentences:
- "\\label{thm:weibull}\nSuppose random variable $X$ follows Weibull distribution,\
\ and $E(X^i)$ denotes the $i$-th moment of $X$. Then the random variable $X$\
\ satisfy the following inequality: \n\\begin{equation}\\label{eq:moments}\n \
\ E(X^n)^{\\frac{1}{n}} \\geq E(X^m)^{\\frac{1}{m}},\n\\end{equation}\nwhere\
\ $n > m$."
- "\\label{lem1}\n\t\tFor all $m,j\\in\\Z$, we have\n\t\t\\begin{equation*}\n\t\
\t|| T_{m,j} (E)-T_{m+k_n,j+k_n}(E)||\\leq C e^{-\\gamma k_n} e^{(\\mathcal\
\ L(E)+\\varepsilon) |m-j|}. \n\t\t\\end{equation*}"
- If the problem \eqref{eq:Model-based_Program} is convex, then under the primal-dual
dynamics \eqref{eq:PDD}-\eqref{eq:AlgebraicConstruction}, the system \eqref{eq:Input-OutputMap}
asymptotically converges to a steady state that is the optimal solution of \eqref{eq:Model-based_Program}.
- source_sentence: What is the rate of convergence for the total error in the given
problem, assuming the conditions in Theorem~\ref{convergence-rates} are met?
sentences:
- "\\label{convergence-rates}\nUnder the assumptions of Theorem~\\ref{well-posedness}.\
\ Given $(\\bu,{p},\\bzeta,\\varphi)\\in (\\bH^{s_1+1}(\\Omega)\\cap \\bV_1)\\\
times (\\text{H}^{s_1}(\\Omega)\\cap Q_{b_1}) \\times (\\bH^{s_2}\\cap \\bV_2)\
\ \\times (\\text{H}^{s_2}\\cap Q_{b_2})$, $(\\bu_h,{p}_h,\\bzeta_h,\\varphi_h)\\\
in \\bV_1^{h,k_1}\\times Q_1^{h,k_1}\\times \\bV_2^{h,k_2}\\times Q_2^{h,k_2}$\
\ be the respective solutions of the continuous and discrete problems, with the\
\ data satisfying $\\fb\\in \\bH^{s_1-1}\\cap \\bQ_{b_1}$ and $g\\in H^{s_2}(\\\
Omega)\\cap Q_{b_2}$. If $\\overline{C}_1 \\sqrt{M} L_\\ell + \\overline{C}_2^2\
\ \\sqrt{M^3} L_\\bbM\\sqrt{2\\mu} (\\norm{\\varphi_D}_{1/2,\\Gamma_D} + \\\
norm{g}_{0,\\Omega}) < 1/2.$ Then, the total error $\\overline{\\textnormal{e}}_h:=\\\
norm{(\\bu-\\bu_h,{p}-{p}_h, \\bzeta-\\bzeta_h,\\varphi-\\varphi_h)}_{\\bV_1\\\
times Q_{1} \\times \\bV_2\\times Q_2}$ decays with the following rate for $s:=\
\ \\min \\left\\{s_1,s_2\\right\\}$\n \\begin{align*}\\label{convergence-rate}\n\
\ \\overline{\\textnormal{e}}_h &\\lesssim h^{ s} (|\\fb|_{s_1-1,\\bQ_{b_1}}\
\ + |\\bu|_{s_1+1,\\bV_1} + |{p}|_{s_1,Q_{b_1}} + |g|_{s_2,Q_{b_2}} + |\\bzeta|_{s_2,\\\
bV_2}+|\\varphi|_{s_2,Q_{b_2}}).\n \\end{align*}"
- "\\label{thm}\nFor vector linear secure aggregation defined above, the optimal\
\ total key rate is \n\\begin{eqnarray}\n R_{Z_{\\Sigma}}^* %= \\left\\{R_{Z_{\\\
Sigma}}: R_{Z_{\\Sigma}} \\geq \n = \\mbox{rank} \\left( \\left[ \\mathbf{F}\
\ ; \\mathbf{G} \\right] \\right)\n - \\mbox{rank} \\left( \\mathbf{F} \\\
right) = \\mbox{rank}({\\bf G} | {\\bf F}).\n %\\right\\}.\n% \\\\ \\\
mbox{rank}\n\\end{eqnarray}"
- "The process $Y(t)$, $t\\geq 0,$ is called Markov branching process with\r\nnon-homogeneous\
\ Poisson immigration (MBPNPI)."
- source_sentence: Is the local time of the horizontal component of the Peano curve
ever greater than 1?
sentences:
- "[Divergence Theorem or Gauss-Green Theorem for Surfaces in $\\R^3$]\n\t\\label{thm:surface_int}\n\
\t Let $\\Sigma \\subset \\Omega\\subseteq\\R^3$ be a bounded smooth surface.\n\
\t Further, $\\bb a:\\Sigma\\to\\R^3$ is a continuously differentiable\
\ vector field that is either defined on the\n\t\t\t\t\tboundary $\\partial\\\
Sigma$ or has a bounded continuous extension to this boundary.\n\t Like\
\ in \\eqref{eq:decomp} it may be decomposed into tangential and normal components\n\
\t\t\t\t\tas follows $\\bb a = \\bb a^\\shortparallel + a_\\nu\\bs\\nu_\\Sigma$.\
\ By $\\dd l$ we denote the line element on \n\t\t\t\t\tthe curve $\\partial \\\
Sigma$. We assume that the curve is continuous and consists of finitely many\n\
\t\t\t\t\tsmooth pieces.\n\t Then the following divergence formula for\
\ surface integrals holds\n\t %\n\t \\begin{align}\n\t \
\ %\n\t \\int\\limits_\\Sigma \\left[\\nabla_\\Sigma\\cdot\\bb a^\\\
shortparallel\\right](\\x)\\;\\dd S\n\t\t\t\t\t\t\t= \\int\\limits_{\\partial\\\
Sigma} \\left[\\bb a\\cdot\\bs\\nu_{\\partial\\Sigma}\\right](\\x)\\,\\dd l .\n\
\t \\label{eq:surface_div}\n\t %\n\t \\end{align}\n\
\t\t\t\t\t%\n\t\t\t\t\tFrom this we obtain the formula\n\t\t\t\t\t%\n\t \
\ \\begin{align}\n\t %\n\t \\int\\limits_\\Sigma \\left[\\\
nabla_\\Sigma\\cdot\\bb a\\right](\\x)\\;\\dd S\n\t\t\t\t\t\t\t= \\int\\limits_{\\\
partial\\Sigma} \\left[\\bb a\\cdot\\bs\\nu_{\\partial\\Sigma}\\right](\\x)\\\
,\\dd l \n\t\t\t\t\t\t\t-\\int\\limits_\\Sigma\\left[ 2\\kappa_Ma_\\nu\\right](\\\
x)\\;\\dd S.\n\t \\label{eq:surface_div_2}\n\t %\n\t \
\ \\end{align}\n\t %"
- There exists local time of the horizontal component $x$ of the Peano curve. Moreover,
this local time attains values no greater than $1$.
- "[Werner-Young's inequality]\\label{Young op-op}\nSuppose $S\\in \\cS^p$ and $T\\\
in \\cS^q$ with $1+r^{-1}=p^{-1}+q^{-1}$.\nThen $S\\star T\\in L^r(\\R^{2d})$\
\ and\n\\begin{align*}\n \\|S\\star T\\|_{L^{r}}\\leq \\|S\\|_{\\cS^p}\\|T\\\
|_{\\cS^q}.\n\\end{align*}"
- source_sentence: What is the meaning of the identity containment $1_x:x\to x$ in
the context of the bond system?
sentences:
- "\\label{lem:opt_lin}\nConsider the optimization problem\n\\begin{equation}\\\
label{eq:max_tr_lem}\n\\begin{aligned}\n \\max_{\\bs{U}}&\\;\\; \\Re\\{\\mrm{tr}(\\\
bs{U}^\\mrm{H}\\bs{B}) \\}\\\\\n \\mrm{s.t. \\;\\;}& \\bs{U}\\in \\mathcal{U}(N),\n\
\\end{aligned}\n\\end{equation}\nwhere $\\bs{B}$ may be an arbitrary $N\\times\
\ N$ matrix with singular value decomposition (SVD) $\\bs{B}=\\bs{U}_{\\bs{B}}\\\
bs{S}_{\\bs{B}}\\bs{V}_{\\bs{B}}^\\mrm{H}$. The solution to \\eqref{eq:max_tr_lem}\
\ is given by\n\\begin{equation}\\label{eq:sol_max}\n \\bs{U}_\\mrm{opt} =\
\ \\bs{U}_{\\bs{B}}^\\mrm{H}\\bs{V}_{\\bs{B}}.\n\\end{equation}\n\\begin{skproof}\n\
\ A formal proof, which may be included in the extended version, can be obtained\
\ by defining the Riemannian gradient over the unitary group and finding the stationary\
\ point where it vanishes. However, an intuitive argument is that the solution\
\ to \\eqref{eq:max_tr_lem} is obtained by positively combining the singular values\
\ of $\\bs{B}$, leading to \\eqref{eq:sol_max}.\n\\end{skproof}"
- '\label{AM_BA_lem1}
Let $$\Omega =\left\{a={{\left(k_1x_1+k_2,\dots,k_1x_n+k_2\right)}}\mid k_1, k_2\in
\mathbb{R}\right\} .$$ Then ${\displaystyle\underset{a\in \Omega}{\operatorname{argmin}}
{J_{\alpha }}(a)=\overline{a}\ },$ where $\overline{a}=\left(\overline{a}_1,\dots,\overline{a}_n\right)$,
$$\overline{a}_i=\frac{1}{n}\sum^n_{j =1}{y_j},\quad\forall i=1,\dots,n.$$ In
other words, on the class of lines $J_{\alpha }\left(a\right)$ reaches a minimum
on a straight line parallel to the $Ox$ axis. So, this is the average line for
the ordinates of all points of set $X$.'
- "A \\emph{bond system} is a tuple $(B,C,s,t,1,\\cdot)$, where $B$ is a set of\
\ \\emph{bonds}, $C$ is a set of \\emph{content} relations, and $s,t:C\\to B$\
\ are \\emph{source} and \\emph{target} functions. For $c\\in C$ with $s(c)=x$\
\ and $t(c)=y$, we write $x\\xrightarrow{c}y$ or $c:x\\to y$, indicating that\
\ $x$ \\emph{contains} $y$. Each bond $x\\in B$ has an \\emph{identity} containment\
\ $1_x:x\\to x$, meaning every bond trivially contains itself. For $c:x\\to y$\
\ and $c':y\\to z$, their composition is $cc':x\\to z$. These data must satisfy:\n\
\ \\begin{enumerate}\n \\item Identity laws: For each $c:x\\to y$, $1_x\
\ c= c=c1_y$\n \\item Associativity: For $c:x\\to y$, $c':y\\to z$, $c'':z\\\
to w$, $c(c'c'')=(cc')c''$\n \\item Anti-symmetry: For $c:x\\to y$ and\
\ $c':y\\to x$, $x=y$\n \\item Left cancellation: For $c,c':x\\to y$ and\
\ $c'':y\\to z$, if $cc''=c'c''$, then $c=c'$\n \\end{enumerate}"
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: SentenceTransformer based on nomic-ai/modernbert-embed-base
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: TESTING
type: TESTING
metrics:
- type: cosine_accuracy@1
value: 0.912782648823258
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.9455468389478542
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.9544300876788187
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9657360406091371
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.912782648823258
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.6628211044454699
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.5434010152284263
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.391439778495616
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.04422775803649693
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.09051899363388177
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.1184143319141888
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.15929812953578346
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.4952481992397162
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.9310623457197055
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.1922204132130754
name: Cosine Map@100
---
# SentenceTransformer based on nomic-ai/modernbert-embed-base
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [nomic-ai/modernbert-embed-base](https://huggingface.co/nomic-ai/modernbert-embed-base). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [nomic-ai/modernbert-embed-base](https://huggingface.co/nomic-ai/modernbert-embed-base) <!-- at revision d556a88e332558790b210f7bdbe87da2fa94a8d8 -->
- **Maximum Sequence Length:** 8192 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: ModernBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("Master-thesis-NAP/nomicAI-ModernBERT-base-finetuned")
# Run inference
sentences = [
'What is the meaning of the identity containment $1_x:x\\to x$ in the context of the bond system?',
"A \\emph{bond system} is a tuple $(B,C,s,t,1,\\cdot)$, where $B$ is a set of \\emph{bonds}, $C$ is a set of \\emph{content} relations, and $s,t:C\\to B$ are \\emph{source} and \\emph{target} functions. For $c\\in C$ with $s(c)=x$ and $t(c)=y$, we write $x\\xrightarrow{c}y$ or $c:x\\to y$, indicating that $x$ \\emph{contains} $y$. Each bond $x\\in B$ has an \\emph{identity} containment $1_x:x\\to x$, meaning every bond trivially contains itself. For $c:x\\to y$ and $c':y\\to z$, their composition is $cc':x\\to z$. These data must satisfy:\n \\begin{enumerate}\n \\item Identity laws: For each $c:x\\to y$, $1_x c= c=c1_y$\n \\item Associativity: For $c:x\\to y$, $c':y\\to z$, $c'':z\\to w$, $c(c'c'')=(cc')c''$\n \\item Anti-symmetry: For $c:x\\to y$ and $c':y\\to x$, $x=y$\n \\item Left cancellation: For $c,c':x\\to y$ and $c'':y\\to z$, if $cc''=c'c''$, then $c=c'$\n \\end{enumerate}",
'\\label{lem:opt_lin}\nConsider the optimization problem\n\\begin{equation}\\label{eq:max_tr_lem}\n\\begin{aligned}\n \\max_{\\bs{U}}&\\;\\; \\Re\\{\\mrm{tr}(\\bs{U}^\\mrm{H}\\bs{B}) \\}\\\\\n \\mrm{s.t. \\;\\;}& \\bs{U}\\in \\mathcal{U}(N),\n\\end{aligned}\n\\end{equation}\nwhere $\\bs{B}$ may be an arbitrary $N\\times N$ matrix with singular value decomposition (SVD) $\\bs{B}=\\bs{U}_{\\bs{B}}\\bs{S}_{\\bs{B}}\\bs{V}_{\\bs{B}}^\\mrm{H}$. The solution to \\eqref{eq:max_tr_lem} is given by\n\\begin{equation}\\label{eq:sol_max}\n \\bs{U}_\\mrm{opt} = \\bs{U}_{\\bs{B}}^\\mrm{H}\\bs{V}_{\\bs{B}}.\n\\end{equation}\n\\begin{skproof}\n A formal proof, which may be included in the extended version, can be obtained by defining the Riemannian gradient over the unitary group and finding the stationary point where it vanishes. However, an intuitive argument is that the solution to \\eqref{eq:max_tr_lem} is obtained by positively combining the singular values of $\\bs{B}$, leading to \\eqref{eq:sol_max}.\n\\end{skproof}',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Dataset: `TESTING`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.9128 |
| cosine_accuracy@3 | 0.9455 |
| cosine_accuracy@5 | 0.9544 |
| cosine_accuracy@10 | 0.9657 |
| cosine_precision@1 | 0.9128 |
| cosine_precision@3 | 0.6628 |
| cosine_precision@5 | 0.5434 |
| cosine_precision@10 | 0.3914 |
| cosine_recall@1 | 0.0442 |
| cosine_recall@3 | 0.0905 |
| cosine_recall@5 | 0.1184 |
| cosine_recall@10 | 0.1593 |
| **cosine_ndcg@10** | **0.4952** |
| cosine_mrr@10 | 0.9311 |
| cosine_map@100 | 0.1922 |
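For intuition, these retrieval metrics can be reproduced from a ranked result list: accuracy@k checks whether any relevant document appears in the top-k, while NDCG@k discounts relevant hits by their rank position. A rough sketch of the two computations (hypothetical ranked lists and document ids, not this model's actual evaluation code):

```python
import math

def accuracy_at_k(ranked, relevant, k):
    """1.0 if any relevant doc id appears in the top-k of the ranking."""
    return 1.0 if any(doc in relevant for doc in ranked[:k]) else 0.0

def ndcg_at_k(ranked, relevant, k):
    """Binary-relevance NDCG@k: DCG of the ranking over the ideal DCG."""
    dcg = sum(1.0 / math.log2(i + 2)
              for i, doc in enumerate(ranked[:k]) if doc in relevant)
    ideal = sum(1.0 / math.log2(i + 2) for i in range(min(len(relevant), k)))
    return dcg / ideal if ideal > 0 else 0.0

ranked = ["d3", "d1", "d7", "d2"]   # hypothetical retrieval order
relevant = {"d1", "d2"}
print(accuracy_at_k(ranked, relevant, 1))  # top hit d3 is not relevant
print(ndcg_at_k(ranked, relevant, 4))
```

Averaging these per-query scores over the evaluation set yields the table values above.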
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 79,876 training samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 9 tokens</li><li>mean: 38.48 tokens</li><li>max: 142 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 210.43 tokens</li><li>max: 924 tokens</li></ul> |
* Samples:
| anchor | positive |
|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>What is the limit of the proportion of 1's in the sequence $a_n$ as $n$ approaches infinity, given that $0 \leq 3g_n -2n \leq 4$?</code> | <code>Let $g_n$ be the number of $1$'s in the sequence $a_1 a_2 \cdots a_n$.<br>Then <br>\begin{equation}<br>0 \leq 3g_n -2n \leq 4<br>\label{star}<br>\end{equation}<br>for all $n$, and hence<br>$\lim_{n \rightarrow \infty} g_n/n = 2/3$.<br>\label{thm1}</code> |
| <code>Does the statement of \textbf{ThmConjAreTrue} imply that the maximum genus of a locally Cohen-Macaulay curve in $\mathbb{P}^3_{\mathbb{C}}$ of degree $d$ that does not lie on a surface of degree $s-1$ is always equal to $g(d,s)$?</code> | <code>\label{ThmConjAreTrue}<br>Conjectures \ref{Conj1} and \ref{Conj2} are true.<br>As a consequence, <br>if either $d=s \geq 1$ or $d \geq 2s+1 \geq 3$, <br>the maximum genus of a locally Cohen-Macaulay curve in $\mathbb{P}^3_{\mathbb{C}}$ of degree $d$ that does not lie on a surface of degree $s-1$ is equal to $g(d,s)$.</code> |
| <code>\\emph{Is the statement \emph{If $X$ is a compact Hausdorff space, then $X$ is normal}, proven in the first isomorphism theorem for topological groups, or is it a well-known result in topology?}</code> | <code>}<br>\newcommand{\ep}{</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
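For intuition, MultipleNegativesRankingLoss treats the other positives in a batch as negatives: the scaled cosine similarities between each anchor and every positive form a row of logits, and the matching positive (the diagonal) is the cross-entropy target. A minimal NumPy sketch of that computation (illustrative only, not the library's implementation):

```python
import numpy as np

def mnrl(anchors, positives, scale=20.0):
    """In-batch multiple-negatives ranking loss (illustrative sketch).

    anchors, positives: (batch, dim) arrays of embeddings.
    The i-th positive is the target for the i-th anchor; all other
    positives in the batch act as negatives.
    """
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = scale * (a @ p.T)                   # (batch, batch) cosine sims
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))          # cross-entropy on the diagonal

rng = np.random.default_rng(0)
a = rng.normal(size=(4, 8))
print(mnrl(a, a))  # identical pairs -> low loss
```

With `scale=20.0` and `cos_sim`, this matches the parameters listed above.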
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `gradient_accumulation_steps`: 8
- `learning_rate`: 2e-05
- `num_train_epochs`: 4
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `bf16`: True
- `tf32`: True
- `load_best_model_at_end`: True
- `optim`: adamw_torch_fused
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 8
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: True
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
<details><summary>Click to expand</summary>
| Epoch | Step | Training Loss | TESTING_cosine_ndcg@10 |
|:-------:|:--------:|:-------------:|:----------------------:|
| -1 | -1 | - | 0.4132 |
| 0.0160 | 10 | 1.6404 | - |
| 0.0320 | 20 | 1.3314 | - |
| 0.0481 | 30 | 1.0877 | - |
| 0.0641 | 40 | 0.6526 | - |
| 0.0801 | 50 | 0.6434 | - |
| 0.0961 | 60 | 0.444 | - |
| 0.1122 | 70 | 0.3879 | - |
| 0.1282 | 80 | 0.2864 | - |
| 0.1442 | 90 | 0.3902 | - |
| 0.1602 | 100 | 0.3298 | - |
| 0.1762 | 110 | 0.2506 | - |
| 0.1923 | 120 | 0.2625 | - |
| 0.2083 | 130 | 0.2319 | - |
| 0.2243 | 140 | 0.3075 | - |
| 0.2403 | 150 | 0.2417 | - |
| 0.2564 | 160 | 0.2789 | - |
| 0.2724 | 170 | 0.2741 | - |
| 0.2884 | 180 | 0.1999 | - |
| 0.3044 | 190 | 0.2293 | - |
| 0.3204 | 200 | 0.2061 | - |
| 0.3365 | 210 | 0.2926 | - |
| 0.3525 | 220 | 0.2226 | - |
| 0.3685 | 230 | 0.2736 | - |
| 0.3845 | 240 | 0.2361 | - |
| 0.4006 | 250 | 0.25 | - |
| 0.4166 | 260 | 0.1745 | - |
| 0.4326 | 270 | 0.1932 | - |
| 0.4486 | 280 | 0.1864 | - |
| 0.4647 | 290 | 0.1804 | - |
| 0.4807 | 300 | 0.175 | - |
| 0.4967 | 310 | 0.1716 | - |
| 0.5127 | 320 | 0.1698 | - |
| 0.5287 | 330 | 0.1706 | - |
| 0.5448 | 340 | 0.1345 | - |
| 0.5608 | 350 | 0.1818 | - |
| 0.5768 | 360 | 0.25 | - |
| 0.5928 | 370 | 0.2521 | - |
| 0.6089 | 380 | 0.1546 | - |
| 0.6249 | 390 | 0.1987 | - |
| 0.6409 | 400 | 0.174 | - |
| 0.6569 | 410 | 0.0936 | - |
| 0.6729 | 420 | 0.162 | - |
| 0.6890 | 430 | 0.1463 | - |
| 0.7050 | 440 | 0.2174 | - |
| 0.7210 | 450 | 0.167 | - |
| 0.7370 | 460 | 0.1563 | - |
| 0.7531 | 470 | 0.1733 | - |
| 0.7691 | 480 | 0.1236 | - |
| 0.7851 | 490 | 0.1436 | - |
| 0.8011 | 500 | 0.1246 | - |
| 0.8171 | 510 | 0.1339 | - |
| 0.8332 | 520 | 0.1118 | - |
| 0.8492 | 530 | 0.2402 | - |
| 0.8652 | 540 | 0.1526 | - |
| 0.8812 | 550 | 0.1555 | - |
| 0.8973 | 560 | 0.1195 | - |
| 0.9133 | 570 | 0.1209 | - |
| 0.9293 | 580 | 0.1152 | - |
| 0.9453 | 590 | 0.2408 | - |
| 0.9613 | 600 | 0.1411 | - |
| 0.9774 | 610 | 0.1856 | - |
| 0.9934 | 620 | 0.121 | - |
| 1.0 | 625 | - | 0.4754 |
| 1.0080 | 630 | 0.1459 | - |
| 1.0240 | 640 | 0.1075 | - |
| 1.0401 | 650 | 0.0629 | - |
| 1.0561 | 660 | 0.065 | - |
| 1.0721 | 670 | 0.0766 | - |
| 1.0881 | 680 | 0.0892 | - |
| 1.1041 | 690 | 0.0849 | - |
| 1.1202 | 700 | 0.0834 | - |
| 1.1362 | 710 | 0.0861 | - |
| 1.1522 | 720 | 0.0884 | - |
| 1.1682 | 730 | 0.0937 | - |
| 1.1843 | 740 | 0.1073 | - |
| 1.2003 | 750 | 0.0946 | - |
| 1.2163 | 760 | 0.0898 | - |
| 1.2323 | 770 | 0.0404 | - |
| 1.2483 | 780 | 0.0742 | - |
| 1.2644 | 790 | 0.0499 | - |
| 1.2804 | 800 | 0.0817 | - |
| 1.2964 | 810 | 0.0779 | - |
| 1.3124 | 820 | 0.0748 | - |
| 1.3285 | 830 | 0.0617 | - |
| 1.3445 | 840 | 0.0386 | - |
| 1.3605 | 850 | 0.097 | - |
| 1.3765 | 860 | 0.0639 | - |
| 1.3925 | 870 | 0.0446 | - |
| 1.4086 | 880 | 0.0711 | - |
| 1.4246 | 890 | 0.0571 | - |
| 1.4406 | 900 | 0.0639 | - |
| 1.4566 | 910 | 0.046 | - |
| 1.4727 | 920 | 0.1049 | - |
| 1.4887 | 930 | 0.0863 | - |
| 1.5047 | 940 | 0.0701 | - |
| 1.5207 | 950 | 0.088 | - |
| 1.5368 | 960 | 0.0513 | - |
| 1.5528 | 970 | 0.0583 | - |
| 1.5688 | 980 | 0.0934 | - |
| 1.5848 | 990 | 0.0772 | - |
| 1.6008 | 1000 | 0.1038 | - |
| 1.6169 | 1010 | 0.0941 | - |
| 1.6329 | 1020 | 0.0629 | - |
| 1.6489 | 1030 | 0.067 | - |
| 1.6649 | 1040 | 0.073 | - |
| 1.6810 | 1050 | 0.1085 | - |
| 1.6970 | 1060 | 0.0801 | - |
| 1.7130 | 1070 | 0.069 | - |
| 1.7290 | 1080 | 0.0615 | - |
| 1.7450 | 1090 | 0.0725 | - |
| 1.7611 | 1100 | 0.0778 | - |
| 1.7771 | 1110 | 0.077 | - |
| 1.7931 | 1120 | 0.0513 | - |
| 1.8091 | 1130 | 0.061 | - |
| 1.8252 | 1140 | 0.0589 | - |
| 1.8412 | 1150 | 0.0526 | - |
| 1.8572 | 1160 | 0.0517 | - |
| 1.8732 | 1170 | 0.056 | - |
| 1.8892 | 1180 | 0.0639 | - |
| 1.9053 | 1190 | 0.0785 | - |
| 1.9213 | 1200 | 0.0769 | - |
| 1.9373 | 1210 | 0.0765 | - |
| 1.9533 | 1220 | 0.0777 | - |
| 1.9694 | 1230 | 0.0728 | - |
| 1.9854 | 1240 | 0.082 | - |
| 2.0 | 1250 | 0.063 | 0.4855 |
| 2.0160 | 1260 | 0.0223 | - |
| 2.0320 | 1270 | 0.0401 | - |
| 2.0481 | 1280 | 0.039 | - |
| 2.0641 | 1290 | 0.0303 | - |
| 2.0801 | 1300 | 0.0323 | - |
| 2.0961 | 1310 | 0.0271 | - |
| 2.1122 | 1320 | 0.0375 | - |
| 2.1282 | 1330 | 0.0516 | - |
| 2.1442 | 1340 | 0.0302 | - |
| 2.1602 | 1350 | 0.036 | - |
| 2.1762 | 1360 | 0.0282 | - |
| 2.1923 | 1370 | 0.0288 | - |
| 2.2083 | 1380 | 0.038 | - |
| 2.2243 | 1390 | 0.0213 | - |
| 2.2403 | 1400 | 0.035 | - |
| 2.2564 | 1410 | 0.0339 | - |
| 2.2724 | 1420 | 0.0359 | - |
| 2.2884 | 1430 | 0.0258 | - |
| 2.3044 | 1440 | 0.0254 | - |
| 2.3204 | 1450 | 0.0278 | - |
| 2.3365 | 1460 | 0.0348 | - |
| 2.3525 | 1470 | 0.0223 | - |
| 2.3685 | 1480 | 0.032 | - |
| 2.3845 | 1490 | 0.0236 | - |
| 2.4006 | 1500 | 0.0366 | - |
| 2.4166 | 1510 | 0.0368 | - |
| 2.4326 | 1520 | 0.0307 | - |
| 2.4486 | 1530 | 0.036 | - |
| 2.4647 | 1540 | 0.0369 | - |
| 2.4807 | 1550 | 0.0379 | - |
| 2.4967 | 1560 | 0.0333 | - |
| 2.5127 | 1570 | 0.031 | - |
| 2.5287 | 1580 | 0.034 | - |
| 2.5448 | 1590 | 0.0242 | - |
| 2.5608 | 1600 | 0.0297 | - |
| 2.5768 | 1610 | 0.027 | - |
| 2.5928 | 1620 | 0.0271 | - |
| 2.6089 | 1630 | 0.038 | - |
| 2.6249 | 1640 | 0.0244 | - |
| 2.6409 | 1650 | 0.0325 | - |
| 2.6569 | 1660 | 0.0352 | - |
| 2.6729 | 1670 | 0.0179 | - |
| 2.6890 | 1680 | 0.0291 | - |
| 2.7050 | 1690 | 0.0355 | - |
| 2.7210 | 1700 | 0.0271 | - |
| 2.7370 | 1710 | 0.049 | - |
| 2.7531 | 1720 | 0.0231 | - |
| 2.7691 | 1730 | 0.023 | - |
| 2.7851 | 1740 | 0.0301 | - |
| 2.8011 | 1750 | 0.0262 | - |
| 2.8171 | 1760 | 0.0281 | - |
| 2.8332 | 1770 | 0.0282 | - |
| 2.8492 | 1780 | 0.0375 | - |
| 2.8652 | 1790 | 0.0486 | - |
| 2.8812 | 1800 | 0.0185 | - |
| 2.8973 | 1810 | 0.0183 | - |
| 2.9133 | 1820 | 0.0362 | - |
| 2.9293 | 1830 | 0.0245 | - |
| 2.9453 | 1840 | 0.0322 | - |
| 2.9613 | 1850 | 0.0568 | - |
| 2.9774 | 1860 | 0.0321 | - |
| 2.9934 | 1870 | 0.0253 | - |
| 3.0 | 1875 | - | 0.4920 |
| 3.0080 | 1880 | 0.0179 | - |
| 3.0240 | 1890 | 0.0192 | - |
| 3.0401 | 1900 | 0.0146 | - |
| 3.0561 | 1910 | 0.0215 | - |
| 3.0721 | 1920 | 0.0316 | - |
| 3.0881 | 1930 | 0.035 | - |
| 3.1041 | 1940 | 0.0164 | - |
| 3.1202 | 1950 | 0.0269 | - |
| 3.1362 | 1960 | 0.0197 | - |
| 3.1522 | 1970 | 0.0175 | - |
| 3.1682 | 1980 | 0.0154 | - |
| 3.1843 | 1990 | 0.0191 | - |
| 3.2003 | 2000 | 0.0242 | - |
| 3.2163 | 2010 | 0.0286 | - |
| 3.2323 | 2020 | 0.0328 | - |
| 3.2483 | 2030 | 0.0237 | - |
| 3.2644 | 2040 | 0.0284 | - |
| 3.2804 | 2050 | 0.0214 | - |
| 3.2964 | 2060 | 0.0169 | - |
| 3.3124 | 2070 | 0.0268 | - |
| 3.3285 | 2080 | 0.042 | - |
| 3.3445 | 2090 | 0.024 | - |
| 3.3605 | 2100 | 0.0133 | - |
| 3.3765 | 2110 | 0.0232 | - |
| 3.3925 | 2120 | 0.0171 | - |
| 3.4086 | 2130 | 0.026 | - |
| 3.4246 | 2140 | 0.0241 | - |
| 3.4406 | 2150 | 0.0159 | - |
| 3.4566 | 2160 | 0.0194 | - |
| 3.4727 | 2170 | 0.0243 | - |
| 3.4887 | 2180 | 0.0181 | - |
| 3.5047 | 2190 | 0.0252 | - |
| 3.5207 | 2200 | 0.0264 | - |
| 3.5368 | 2210 | 0.0182 | - |
| 3.5528 | 2220 | 0.0403 | - |
| 3.5688 | 2230 | 0.0247 | - |
| 3.5848 | 2240 | 0.0191 | - |
| 3.6008 | 2250 | 0.0225 | - |
| 3.6169 | 2260 | 0.0231 | - |
| 3.6329 | 2270 | 0.0154 | - |
| 3.6489 | 2280 | 0.0227 | - |
| 3.6649 | 2290 | 0.0209 | - |
| 3.6810 | 2300 | 0.0271 | - |
| 3.6970 | 2310 | 0.0184 | - |
| 3.7130 | 2320 | 0.0316 | - |
| 3.7290 | 2330 | 0.018 | - |
| 3.7450 | 2340 | 0.0209 | - |
| 3.7611 | 2350 | 0.0211 | - |
| 3.7771 | 2360 | 0.0248 | - |
| 3.7931 | 2370 | 0.0207 | - |
| 3.8091 | 2380 | 0.0159 | - |
| 3.8252 | 2390 | 0.0222 | - |
| 3.8412 | 2400 | 0.0193 | - |
| 3.8572 | 2410 | 0.0146 | - |
| 3.8732 | 2420 | 0.0187 | - |
| 3.8892 | 2430 | 0.0131 | - |
| 3.9053 | 2440 | 0.018 | - |
| 3.9213 | 2450 | 0.0164 | - |
| 3.9373 | 2460 | 0.0242 | - |
| 3.9533 | 2470 | 0.0195 | - |
| 3.9694 | 2480 | 0.0189 | - |
| 3.9854 | 2490 | 0.0176 | - |
| **4.0** | **2500** | **0.0207** | **0.4952** |
* The bold row denotes the saved checkpoint.
</details>
### Framework Versions
- Python: 3.11.12
- Sentence Transformers: 4.1.0
- Transformers: 4.52.2
- PyTorch: 2.6.0+cu124
- Accelerate: 1.7.0
- Datasets: 2.14.4
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
Raymiii/lora-trained-xl | Raymiii | 2025-05-28T15:03:55Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
]
| text-to-image | 2025-04-02T13:54:01Z | ---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: openrail++
instance_prompt: a photo of sks cow
widget:
- text: A photo of sks cow
output:
url: image_0.png
- text: A photo of sks cow
output:
url: image_1.png
- text: A photo of sks cow
output:
url: image_2.png
- text: A photo of sks cow
output:
url: image_3.png
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - Raymiii/lora-trained-xl
<Gallery />
## Model description
These are Raymiii/lora-trained-xl LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use a photo of sks cow to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](Raymiii/lora-trained-xl/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# Minimal sketch (not from the training run): load the SDXL base pipeline,
# then attach this repository's LoRA weights before generating.
from diffusers import DiffusionPipeline
import torch

pipeline = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights("Raymiii/lora-trained-xl")
image = pipeline("A photo of sks cow").images[0]
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
tencent/HunyuanVideo-Avatar | tencent | 2025-05-28T15:02:17Z | 0 | 58 | null | [
"safetensors",
"image-to-video",
"en",
"arxiv:2505.20156",
"region:us"
]
| image-to-video | 2025-05-26T08:26:28Z | ---
pipeline_tag: image-to-video
language:
- en
---
<!-- ## **HunyuanVideo-Avatar** -->
<p align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/646d7592bb95b5d4001e5a04/HDZpvr8F-UaHAHlsF--fh.png" height=100>
</p>
<div align="center">
<a href="https://github.com/Tencent-Hunyuan/HunyuanVideo-Avatar"><img src="https://img.shields.io/static/v1?label=HunyuanVideo-Avatar%20Code&message=Github&color=blue"></a>
<a href="https://HunyuanVideo-Avatar.github.io/"><img src="https://img.shields.io/static/v1?label=Project%20Page&message=Web&color=green"></a>
<a href="https://hunyuan.tencent.com/modelSquare/home/play?modelId=126"><img src="https://img.shields.io/static/v1?label=Playground&message=Web&color=green"></a>
<a href="https://arxiv.org/pdf/2505.20156"><img src="https://img.shields.io/static/v1?label=Tech Report&message=Arxiv&color=red"></a>
<a href="https://huggingface.co/tencent/HunyuanVideo-Avatar"><img src="https://img.shields.io/static/v1?label=HunyuanVideo-Avatar&message=HuggingFace&color=yellow"></a>
</div>

> [**HunyuanVideo-Avatar: High-Fidelity Audio-Driven Human Animation for Multiple Characters**](https://arxiv.org/pdf/2505.20156) <br>
## **Abstract**
Recent years have witnessed significant progress in audio-driven human animation. However, critical challenges remain in (i) generating highly dynamic videos while preserving character consistency, (ii) achieving precise emotion alignment between characters and audio, and (iii) enabling multi-character audio-driven animation. To address these challenges, we propose HunyuanVideo-Avatar, a multimodal diffusion transformer (MM-DiT)-based model capable of simultaneously generating dynamic, emotion-controllable, and multi-character dialogue videos. Concretely, HunyuanVideo-Avatar introduces three key innovations: (i) A character image injection module is designed to replace the conventional addition-based character conditioning scheme, eliminating the inherent condition mismatch between training and inference. This ensures the dynamic motion and strong character consistency; (ii) An Audio Emotion Module (AEM) is introduced to extract and transfer the emotional cues from an emotion reference image to the target generated video, enabling fine-grained and accurate emotion style control; (iii) A Face-Aware Audio Adapter (FAA) is proposed to isolate the audio-driven character with latent-level face mask, enabling independent audio injection via cross-attention for multi-character scenarios. These innovations empower HunyuanVideo-Avatar to surpass state-of-the-art methods on benchmark datasets and a newly proposed wild dataset, generating realistic avatars in dynamic, immersive scenarios. The source code and model weights will be released publicly.
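The Face-Aware Audio Adapter idea — letting each audio stream influence only its own character's face region through masked cross-attention — can be sketched in a simplified form. The following NumPy toy (function name, mask layout, and the absence of learned projections are all illustrative assumptions, not the released code) shows how disjoint latent masks keep each character's audio injection independent:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def masked_audio_cross_attention(latents, audio_tokens, face_masks):
    """Toy FAA-style injection: for each character, latent positions inside
    its (assumed disjoint) face mask attend only to that character's audio
    tokens and receive a residual update; all other positions are untouched.

    latents: (L, d); audio_tokens: list of (T_i, d); face_masks: list of (L,) bool.
    """
    d = latents.shape[1]
    out = latents.copy()
    for audio, mask in zip(audio_tokens, face_masks):
        q = latents[mask]                       # queries from this face region
        scores = q @ audio.T / np.sqrt(d)       # (L_i, T_i) attention logits
        attn = softmax(scores, axis=-1)
        out[mask] = out[mask] + attn @ audio    # residual audio injection
    return out
```

In the real model this happens inside MM-DiT blocks with learned query/key/value projections; the sketch only captures the masking principle.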
## **HunyuanVideo-Avatar Overall Architecture**

We propose **HunyuanVideo-Avatar**, a multi-modal diffusion transformer(MM-DiT)-based model capable of generating **dynamic**, **emotion-controllable**, and **multi-character dialogue** videos.
## 🎉 **HunyuanVideo-Avatar Key Features**

### **High-Dynamic and Emotion-Controllable Video Generation**
HunyuanVideo-Avatar can animate any input **avatar image** into a **high-dynamic**, **emotion-controllable** video from simple **audio conditions**. Specifically, it accepts **multi-style** avatar images at **arbitrary scales and resolutions**: supported styles span photorealistic, cartoon, 3D-rendered, and anthropomorphic characters, and generation scales span portrait, upper-body, and full-body framing. The generated videos feature highly dynamic foregrounds and backgrounds with superior realism and naturalness. In addition, the system can control the characters' facial emotions conditioned on the input audio.
### **Various Applications**
HunyuanVideo-Avatar supports various downstream tasks and applications. For instance, it generates talking-avatar videos applicable to e-commerce, online streaming, and social-media video production. In addition, its multi-character animation feature broadens its use in video content creation and editing.
## 🚀 Parallel Inference on Multiple GPUs
For example, to generate a video with 8 GPUs, you can use the following command:
```bash
cd HunyuanVideo-Avatar
JOBS_DIR=$(dirname $(dirname "$0"))
export PYTHONPATH=./
export MODEL_BASE="./weights"
OUTPUT_BASEPATH=./results  # output directory consumed by --save-path below
checkpoint_path=${MODEL_BASE}/ckpts/hunyuan-video-t2v-720p/transformers/mp_rank_00_model_states.pt
torchrun --nnodes=1 --nproc_per_node=8 --master_port 29605 hymm_sp/sample_batch.py \
--input 'assets/test.csv' \
--ckpt ${checkpoint_path} \
--sample-n-frames 129 \
--seed 128 \
--image-size 704 \
--cfg-scale 7.5 \
--infer-steps 50 \
--use-deepcache 1 \
--flow-shift-eval-video 5.0 \
--save-path ${OUTPUT_BASEPATH}
```
## 🔑 Single-gpu Inference
For example, to generate a video with 1 GPU, you can use the following command:
```bash
cd HunyuanVideo-Avatar
JOBS_DIR=$(dirname $(dirname "$0"))
export PYTHONPATH=./
export MODEL_BASE=./weights
OUTPUT_BASEPATH=./results-single
checkpoint_path=${MODEL_BASE}/ckpts/hunyuan-video-t2v-720p/transformers/mp_rank_00_model_states_fp8.pt
export DISABLE_SP=1
CUDA_VISIBLE_DEVICES=0 python3 hymm_sp/sample_gpu_poor.py \
--input 'assets/test.csv' \
--ckpt ${checkpoint_path} \
--sample-n-frames 129 \
--seed 128 \
--image-size 704 \
--cfg-scale 7.5 \
--infer-steps 50 \
--use-deepcache 1 \
--flow-shift-eval-video 5.0 \
--save-path ${OUTPUT_BASEPATH} \
--use-fp8 \
--infer-min
```
### Run with very low VRAM
```bash
cd HunyuanVideo-Avatar
JOBS_DIR=$(dirname $(dirname "$0"))
export PYTHONPATH=./
export MODEL_BASE=./weights
OUTPUT_BASEPATH=./results-poor
checkpoint_path=${MODEL_BASE}/ckpts/hunyuan-video-t2v-720p/transformers/mp_rank_00_model_states_fp8.pt
export CPU_OFFLOAD=1
CUDA_VISIBLE_DEVICES=0 python3 hymm_sp/sample_gpu_poor.py \
--input 'assets/test.csv' \
--ckpt ${checkpoint_path} \
--sample-n-frames 129 \
--seed 128 \
--image-size 704 \
--cfg-scale 7.5 \
--infer-steps 50 \
--use-deepcache 1 \
--flow-shift-eval-video 5.0 \
--save-path ${OUTPUT_BASEPATH} \
--use-fp8 \
--cpu-offload \
--infer-min
```
## Run a Gradio Server
```bash
cd HunyuanVideo-Avatar
bash ./scripts/run_gradio.sh
```
## 🔗 BibTeX
If you find [HunyuanVideo-Avatar](https://arxiv.org/pdf/2505.20156) useful for your research and applications, please cite using this BibTeX:
```BibTeX
@misc{hu2025HunyuanVideo-Avatar,
title={HunyuanVideo-Avatar: High-Fidelity Audio-Driven Human Animation for Multiple Characters},
author={Yi Chen and Sen Liang and Zixiang Zhou and Ziyao Huang and Yifeng Ma and Junshu Tang and Qin Lin and Yuan Zhou and Qinglin Lu},
year={2025},
eprint={2505.20156},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/pdf/2505.20156},
}
```
## Acknowledgements
We would like to thank the contributors to the [HunyuanVideo](https://github.com/Tencent/HunyuanVideo), [SD3](https://huggingface.co/stabilityai/stable-diffusion-3-medium), [FLUX](https://github.com/black-forest-labs/flux), [Llama](https://github.com/meta-llama/llama), [LLaVA](https://github.com/haotian-liu/LLaVA), [Xtuner](https://github.com/InternLM/xtuner), [diffusers](https://github.com/huggingface/diffusers) and [HuggingFace](https://huggingface.co) repositories, for their open research and exploration. |
icefog72/Ice0.123-28.05-RP-4.2bpw | icefog72 | 2025-05-28T14:58:53Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2312.06795",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"exl2",
"region:us"
]
| text-generation | 2025-05-28T14:52:29Z | ---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
---
# Ice0.123-28.05-RP
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Breadcrumbs](https://arxiv.org/abs/2312.06795) merge method using H:\FModels\Mistral-7B-v0.2 as a base.
### Models Merged
The following models were included in the merge:
* G:\FModels\Ice0.115-10.05-RP
* H:\FModels\Ice0.80-03.02-RP
* F:\FModels\Ice0.122-28.05-RP
* H:\FModels\Ice0.104-13.04-RP
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: H:\FModels\Ice0.104-13.04-RP
parameters:
weight: 0.3
- model: H:\FModels\Ice0.80-03.02-RP
parameters:
weight: 0.3
- model: G:\FModels\Ice0.115-10.05-RP
parameters:
weight: 0.6
- model: F:\FModels\Ice0.122-28.05-RP
parameters:
weight: 0.8
merge_method: breadcrumbs
base_model: H:\FModels\Mistral-7B-v0.2
parameters:
lambda: 0.5
density: 0.9
gamma: 0.01
dtype: bfloat16
chat_template: "alpaca"
```
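To give an intuition for what the `density`/`gamma` parameters above control, here is a toy NumPy sketch of one plausible reading of breadcrumbs-style task-vector sparsification (the exact mergekit masking rules may differ; this is illustration, not the merge implementation):

```python
import numpy as np

def breadcrumbs_delta(delta, density=0.9, gamma=0.01):
    """Sparsify a task vector (fine-tuned weights minus base weights):
    drop the top `gamma` fraction of entries by magnitude (outliers) and
    keep only the next `density` fraction, zeroing the small remainder."""
    flat = delta.ravel()
    n = flat.size
    order = np.argsort(np.abs(flat))     # indices, ascending by magnitude
    n_top = int(gamma * n)               # largest-magnitude outliers to drop
    n_keep = int(density * n)            # mid-to-high band that survives
    keep = np.zeros(n, dtype=bool)
    keep[order[n - n_top - n_keep : n - n_top]] = True
    return np.where(keep, flat, 0.0).reshape(delta.shape)
```

Per tensor, the merged model would then be roughly `base + lambda * sum(weight_i * breadcrumbs_delta(ft_i - base))`, with the weights taken from the YAML above.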
|
tetttssts/llama_adapter1 | tetttssts | 2025-05-28T14:56:24Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2_5_vl",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-28T14:56:15Z | ---
base_model: unsloth/qwen2.5-vl-7b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2_5_vl
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** tetttssts
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-vl-7b-instruct-bnb-4bit
This qwen2_5_vl model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
rivotrilnft/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-carnivorous_scented_whale | rivotrilnft | 2025-05-28T14:55:43Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am carnivorous scented whale",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-02T19:19:17Z | ---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-carnivorous_scented_whale
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am carnivorous scented whale
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-carnivorous_scented_whale
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="rivotrilnft/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-carnivorous_scented_whale", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
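As a rough sketch of the group-relative idea behind GRPO (illustrative only; see the paper for the full clipped-policy objective), the advantage of each sampled completion is its reward standardized within its own sampling group:

```python
import numpy as np

def group_relative_advantages(rewards):
    """GRPO-style advantages for one group of sampled completions:
    center and scale each reward by the group's mean and std."""
    r = np.asarray(rewards, dtype=float)
    return (r - r.mean()) / (r.std() + 1e-8)
```

This removes the need for a learned value baseline: completions are judged only relative to their siblings in the same group.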
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
georgeiac00/sft_llama3_instruct_full_prec_full_data_5_ep | georgeiac00 | 2025-05-28T14:53:32Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-28T14:52:25Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mjs227/llama-rw-sft | mjs227 | 2025-05-28T14:52:02Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-28T13:54:17Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
VOidChill/priyanka-kumar | VOidChill | 2025-05-28T14:51:40Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"fluxgym",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
]
| text-to-image | 2025-05-28T14:51:37Z | ---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- fluxgym
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: Priyankakumar
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# Priyanka Kumar
A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym)
<Gallery />
## Trigger words
You should use `Priyankakumar` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc.
Weights for this model are available in Safetensors format.
|
HosenM12/Hosen | HosenM12 | 2025-05-28T14:50:44Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2025-05-28T14:50:44Z | ---
license: creativeml-openrail-m
---
|
5xgrowth/karl | 5xgrowth | 2025-05-28T14:50:08Z | 0 | 0 | null | [
"license:other",
"region:us"
]
| null | 2025-05-28T14:13:57Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
--- |
TOMFORD79/C2MIX5 | TOMFORD79 | 2025-05-28T14:45:43Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-28T14:41:57Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
AravindS373/bird_multi_700 | AravindS373 | 2025-05-28T14:45:17Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"arxiv:1910.09700",
"base_model:Qwen/Qwen2.5-Coder-7B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-Coder-7B-Instruct",
"region:us"
]
| null | 2025-05-28T10:37:07Z | ---
base_model: Qwen/Qwen2.5-Coder-7B-Instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2 |
TOMFORD79/X2H7 | TOMFORD79 | 2025-05-28T14:45:01Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-28T14:39:06Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
BootesVoid/cmb7xqr600e1clexp642ux771_cmb80n6r50ezklexpa3bqyb3n | BootesVoid | 2025-05-28T14:44:39Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
]
| text-to-image | 2025-05-28T14:44:36Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: EVA
---
# Cmb7Xqr600E1Clexp642Ux771_Cmb80N6R50Ezklexpa3Bqyb3N
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `EVA` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "EVA",
"lora_weights": "https://huggingface.co/BootesVoid/cmb7xqr600e1clexp642ux771_cmb80n6r50ezklexpa3bqyb3n/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmb7xqr600e1clexp642ux771_cmb80n6r50ezklexpa3bqyb3n', weight_name='lora.safetensors')
image = pipeline('EVA').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmb7xqr600e1clexp642ux771_cmb80n6r50ezklexpa3bqyb3n/discussions) to add images that show off what you’ve made with this LoRA.
|
Usertrr/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-rough_curious_rooster | Usertrr | 2025-05-28T14:44:25Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am rough curious rooster",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-12T07:08:50Z | ---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-rough_curious_rooster
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am rough curious rooster
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-rough_curious_rooster
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Usertrr/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-rough_curious_rooster", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0+cu124
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
h34v7/DXP-Zero-V1.2-24b-Small-Instruct-i1-GGUF | h34v7 | 2025-05-28T14:43:22Z | 73 | 0 | null | [
"gguf",
"base_model:h34v7/DXP-Zero-V1.2-24b-Small-Instruct",
"base_model:quantized:h34v7/DXP-Zero-V1.2-24b-Small-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
]
| null | 2025-05-26T17:25:46Z | ---
license: apache-2.0
base_model:
- h34v7/DXP-Zero-V1.2-24b-Small-Instruct
---
The 1-bit quants might be incoherent.
i1-IQ2_XXS is the bare minimum I would recommend. It runs slowly (less than half a token per second, 0.3 to 0.45 t/s) on Snapdragon Elite Android, but it is amazingly coherent and awesome, and uses about 7.1 GB of RAM with a 512 context. I haven't tried it on a Raspberry Pi.
Output sample using i1-IQ2_XXS in Termux on Android:
```
User: good what di you think about solar system?
Assistant: The solar system is truly fascinating! It consists of the Sun, eight
planets (in order from closest to farthest: Mercury, Venus, Earth, Mars,
Jupiter, Saturn, Uranus, and Neptune), as well as various dwarf planets
like Pluto and Ceres. The system also includes a large number of moons,
asteroids, comets, and other small objects.
Some interesting facts about the solar system:
1. It's about 4.6 billion years old, formed from a collapsing cloud of
interstellar dust and gas.
2. The Sun accounts for 99% of the mass of the solar system.
3. The planets are divided into terrestrial (rocky) planets and gas giants.
4. Many moons in the solar system show signs of activity, like geysers or
ice volcanoes.
5. The Oort Cloud, a vast collection of icy objects, is the farthest part
of the solar system.
6. We continue to make new discoveries, like the recent finding of new
dwarf planets and the exploration of Mars by rovers.
7. The Voyager 2 spacecraft has traveled further from the Sun than any
other man-made object.
8. Pluto's reclassification as a dwarf planet sparked debates about
planetary definitions.
9. The exoplanets we've discovered in other star systems have expanded our
understanding of what a "planet" can be.
10. The New Horizons mission gave us close-up views of Pluto and the Kuiper
Belt object Arrokoth.
What specific aspect of the solar system are you most interested in? I'd be
happy to share more information on any particular part of our cosmic
neighborhood!
```
|
bykaralord/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lazy_thorny_cassowary | bykaralord | 2025-05-28T14:41:51Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am lazy thorny cassowary",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-28T13:41:38Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lazy_thorny_cassowary
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am lazy thorny cassowary
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lazy_thorny_cassowary
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="bykaralord/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lazy_thorny_cassowary", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
jenniferlumeng/sagittal-b4-finetuned-segments | jenniferlumeng | 2025-05-28T14:38:45Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"segformer",
"vision",
"image-segmentation",
"generated_from_trainer",
"base_model:nvidia/mit-b4",
"base_model:finetune:nvidia/mit-b4",
"license:other",
"endpoints_compatible",
"region:us"
]
| image-segmentation | 2025-05-28T06:08:15Z | ---
library_name: transformers
license: other
base_model: nvidia/mit-b4
tags:
- vision
- image-segmentation
- generated_from_trainer
model-index:
- name: sagittal-b4-finetuned-segments
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sagittal-b4-finetuned-segments
This model is a fine-tuned version of [nvidia/mit-b4](https://huggingface.co/nvidia/mit-b4) on the jenniferlumeng/Sagittal dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5610
- Mean Iou: 0.6387
- Mean Accuracy: 0.7597
- Overall Accuracy: 0.7684
- Accuracy Background: nan
- Accuracy Olfactory bulb: 0.7170
- Accuracy Anterior olfactory nucleus: 0.6456
- Accuracy Basal ganglia: 0.7788
- Accuracy Cortex: 0.7965
- Accuracy Hypothalamus: 0.6187
- Accuracy Thalamus: 0.7553
- Accuracy Hippocampus: 0.8524
- Accuracy Midbrain: 0.8602
- Accuracy Cerebellum: 0.7899
- Accuracy Pons and medulla: 0.7831
- Iou Background: 0.0
- Iou Olfactory bulb: 0.6979
- Iou Anterior olfactory nucleus: 0.5897
- Iou Basal ganglia: 0.7036
- Iou Cortex: 0.7569
- Iou Hypothalamus: 0.5348
- Iou Thalamus: 0.7058
- Iou Hippocampus: 0.8192
- Iou Midbrain: 0.7187
- Iou Cerebellum: 0.7689
- Iou Pons and medulla: 0.7295
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Background | Accuracy Olfactory bulb | Accuracy Anterior olfactory nucleus | Accuracy Basal ganglia | Accuracy Cortex | Accuracy Hypothalamus | Accuracy Thalamus | Accuracy Hippocampus | Accuracy Midbrain | Accuracy Cerebellum | Accuracy Pons and medulla | Iou Background | Iou Olfactory bulb | Iou Anterior olfactory nucleus | Iou Basal ganglia | Iou Cortex | Iou Hypothalamus | Iou Thalamus | Iou Hippocampus | Iou Midbrain | Iou Cerebellum | Iou Pons and medulla |
|:-------------:|:-------:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:-------------------:|:-----------------------:|:-----------------------------------:|:----------------------:|:---------------:|:---------------------:|:-----------------:|:--------------------:|:-----------------:|:-------------------:|:-------------------------:|:--------------:|:------------------:|:------------------------------:|:-----------------:|:----------:|:----------------:|:------------:|:---------------:|:------------:|:--------------:|:--------------------:|
| 1.1622 | 3.3333 | 20 | 1.5275 | 0.2290 | 0.2936 | 0.3032 | nan | 0.3546 | 0.0661 | 0.3629 | 0.4619 | 0.3534 | 0.0245 | 0.4997 | 0.3121 | 0.3472 | 0.1532 | 0.0 | 0.3179 | 0.0654 | 0.2793 | 0.3896 | 0.3249 | 0.0232 | 0.4386 | 0.1852 | 0.3458 | 0.1488 |
| 0.8211 | 6.6667 | 40 | 1.0160 | 0.3284 | 0.4622 | 0.4886 | nan | 0.4025 | 0.2457 | 0.4573 | 0.7211 | 0.3213 | 0.7259 | 0.5201 | 0.4266 | 0.3944 | 0.4069 | 0.0 | 0.3734 | 0.2422 | 0.3826 | 0.4803 | 0.2946 | 0.2820 | 0.4623 | 0.3025 | 0.3941 | 0.3986 |
| 0.2823 | 10.0 | 60 | 0.9503 | 0.4263 | 0.5468 | 0.5681 | nan | 0.4108 | 0.3905 | 0.5659 | 0.6721 | 0.4120 | 0.7734 | 0.5256 | 0.6114 | 0.6022 | 0.5039 | 0.0 | 0.4030 | 0.3792 | 0.4867 | 0.6002 | 0.3712 | 0.5518 | 0.4860 | 0.4593 | 0.4789 | 0.4733 |
| 0.4346 | 13.3333 | 80 | 0.6683 | 0.5384 | 0.6798 | 0.7221 | nan | 0.5424 | 0.5376 | 0.7562 | 0.8557 | 0.5587 | 0.8000 | 0.5245 | 0.7792 | 0.7022 | 0.7419 | 0.0 | 0.5198 | 0.5055 | 0.6668 | 0.7488 | 0.4901 | 0.6675 | 0.4835 | 0.5702 | 0.6572 | 0.6131 |
| 0.1348 | 16.6667 | 100 | 0.5909 | 0.5275 | 0.6836 | 0.7131 | nan | 0.5024 | 0.5379 | 0.6884 | 0.7820 | 0.6158 | 0.8733 | 0.5253 | 0.7972 | 0.8618 | 0.6525 | 0.0 | 0.4253 | 0.5029 | 0.5920 | 0.7553 | 0.5203 | 0.5693 | 0.4756 | 0.5773 | 0.7332 | 0.6511 |
| 0.1317 | 20.0 | 120 | 0.5279 | 0.6000 | 0.7499 | 0.7699 | nan | 0.6679 | 0.6691 | 0.6804 | 0.9212 | 0.6790 | 0.7882 | 0.7477 | 0.8093 | 0.8195 | 0.7169 | 0.0 | 0.6282 | 0.6283 | 0.6090 | 0.7796 | 0.6010 | 0.5967 | 0.6350 | 0.6556 | 0.7736 | 0.6933 |
| 0.2667 | 23.3333 | 140 | 0.6451 | 0.5482 | 0.6840 | 0.6961 | nan | 0.6738 | 0.5915 | 0.6175 | 0.7717 | 0.6215 | 0.7162 | 0.7077 | 0.7127 | 0.7174 | 0.7097 | 0.0 | 0.6220 | 0.5489 | 0.5643 | 0.7119 | 0.5204 | 0.5866 | 0.6468 | 0.5720 | 0.6582 | 0.5986 |
| 0.3673 | 26.6667 | 160 | 0.5395 | 0.5843 | 0.7265 | 0.7280 | nan | 0.7682 | 0.6859 | 0.6984 | 0.8040 | 0.6214 | 0.7752 | 0.8302 | 0.7929 | 0.5215 | 0.7669 | 0.0 | 0.7397 | 0.6224 | 0.6607 | 0.6138 | 0.5355 | 0.6739 | 0.6855 | 0.6733 | 0.5017 | 0.7208 |
| 0.345 | 30.0 | 180 | 0.4865 | 0.6101 | 0.7534 | 0.7675 | nan | 0.7244 | 0.7111 | 0.7634 | 0.9073 | 0.7027 | 0.7449 | 0.7589 | 0.8557 | 0.6596 | 0.7061 | 0.0 | 0.7089 | 0.6334 | 0.6898 | 0.6957 | 0.5837 | 0.6786 | 0.6832 | 0.7091 | 0.6404 | 0.6886 |
| 0.1892 | 33.3333 | 200 | 0.5088 | 0.6134 | 0.7589 | 0.7739 | nan | 0.6971 | 0.6785 | 0.7077 | 0.8255 | 0.6950 | 0.7285 | 0.8019 | 0.7823 | 0.8302 | 0.8419 | 0.0 | 0.6760 | 0.6139 | 0.6244 | 0.7471 | 0.5948 | 0.6243 | 0.7012 | 0.6364 | 0.7359 | 0.7934 |
| 0.283 | 36.6667 | 220 | 0.5012 | 0.6032 | 0.7387 | 0.7525 | nan | 0.6736 | 0.6548 | 0.6843 | 0.8329 | 0.6138 | 0.7489 | 0.8097 | 0.7708 | 0.8219 | 0.7763 | 0.0 | 0.6511 | 0.5898 | 0.5952 | 0.7460 | 0.5490 | 0.6433 | 0.7184 | 0.6573 | 0.7478 | 0.7373 |
| 0.3255 | 40.0 | 240 | 0.4538 | 0.6439 | 0.7751 | 0.7926 | nan | 0.6323 | 0.6450 | 0.7895 | 0.8253 | 0.6834 | 0.8150 | 0.8167 | 0.8587 | 0.8580 | 0.8274 | 0.0 | 0.6155 | 0.5910 | 0.6879 | 0.7771 | 0.5999 | 0.7198 | 0.7293 | 0.7568 | 0.8085 | 0.7969 |
| 0.148 | 43.3333 | 260 | 0.5867 | 0.5934 | 0.7219 | 0.7242 | nan | 0.5819 | 0.6130 | 0.7968 | 0.7211 | 0.6326 | 0.7201 | 0.8921 | 0.7792 | 0.7586 | 0.7235 | 0.0 | 0.5698 | 0.5527 | 0.7039 | 0.6853 | 0.5568 | 0.6735 | 0.7675 | 0.6665 | 0.6775 | 0.6738 |
| 0.2442 | 46.6667 | 280 | 0.5438 | 0.6123 | 0.7363 | 0.7502 | nan | 0.6327 | 0.6296 | 0.7893 | 0.7839 | 0.5792 | 0.7350 | 0.8244 | 0.8132 | 0.7980 | 0.7780 | 0.0 | 0.6221 | 0.5730 | 0.6917 | 0.7543 | 0.5004 | 0.6962 | 0.7619 | 0.6561 | 0.7618 | 0.7179 |
| 0.1645 | 50.0 | 300 | 0.5079 | 0.6323 | 0.7651 | 0.7711 | nan | 0.7346 | 0.6775 | 0.7749 | 0.7836 | 0.6132 | 0.7336 | 0.8661 | 0.8496 | 0.8220 | 0.7960 | 0.0 | 0.6891 | 0.6033 | 0.7091 | 0.7671 | 0.5359 | 0.6642 | 0.7340 | 0.7250 | 0.7986 | 0.7295 |
| 0.2699 | 53.3333 | 320 | 0.5663 | 0.6069 | 0.7401 | 0.7475 | nan | 0.7376 | 0.6604 | 0.7358 | 0.8071 | 0.6238 | 0.7225 | 0.8290 | 0.8199 | 0.7012 | 0.7635 | 0.0 | 0.7229 | 0.6041 | 0.6395 | 0.7020 | 0.5321 | 0.6502 | 0.7546 | 0.6855 | 0.6720 | 0.7131 |
| 0.2053 | 56.6667 | 340 | 0.5013 | 0.6341 | 0.7684 | 0.7750 | nan | 0.7147 | 0.6551 | 0.7489 | 0.8326 | 0.6458 | 0.8202 | 0.8792 | 0.8516 | 0.7662 | 0.7696 | 0.0 | 0.6918 | 0.5916 | 0.6512 | 0.7922 | 0.5612 | 0.7269 | 0.7414 | 0.7425 | 0.7519 | 0.7245 |
| 0.2427 | 60.0 | 360 | 0.4900 | 0.6275 | 0.7673 | 0.7721 | nan | 0.7584 | 0.7267 | 0.7405 | 0.8320 | 0.6785 | 0.7632 | 0.8677 | 0.8152 | 0.6697 | 0.8215 | 0.0 | 0.7289 | 0.6565 | 0.6647 | 0.7254 | 0.5795 | 0.6798 | 0.7799 | 0.6798 | 0.6329 | 0.7752 |
| 0.0668 | 63.3333 | 380 | 0.4845 | 0.6435 | 0.7722 | 0.7766 | nan | 0.7479 | 0.7064 | 0.7754 | 0.7830 | 0.6316 | 0.7340 | 0.8832 | 0.8429 | 0.7855 | 0.8320 | 0.0 | 0.7189 | 0.6336 | 0.6988 | 0.7412 | 0.5582 | 0.6887 | 0.8092 | 0.7069 | 0.7433 | 0.7797 |
| 0.1278 | 66.6667 | 400 | 0.5318 | 0.6220 | 0.7447 | 0.7560 | nan | 0.7063 | 0.6682 | 0.7959 | 0.7900 | 0.6057 | 0.7272 | 0.8067 | 0.8161 | 0.7418 | 0.7891 | 0.0 | 0.6939 | 0.6056 | 0.7106 | 0.7178 | 0.5353 | 0.7047 | 0.7581 | 0.6757 | 0.7074 | 0.7330 |
| 0.1184 | 70.0 | 420 | 0.5153 | 0.6434 | 0.7695 | 0.7778 | nan | 0.7200 | 0.6898 | 0.7627 | 0.8246 | 0.6589 | 0.7738 | 0.8395 | 0.8716 | 0.7698 | 0.7847 | 0.0 | 0.6858 | 0.6246 | 0.7008 | 0.7713 | 0.5730 | 0.6913 | 0.8064 | 0.7338 | 0.7483 | 0.7425 |
| 0.1317 | 73.3333 | 440 | 0.5403 | 0.6346 | 0.7586 | 0.7668 | nan | 0.7143 | 0.6677 | 0.7672 | 0.7990 | 0.5974 | 0.7354 | 0.8611 | 0.8529 | 0.8030 | 0.7876 | 0.0 | 0.6901 | 0.6051 | 0.6957 | 0.7751 | 0.5214 | 0.6903 | 0.8054 | 0.7020 | 0.7681 | 0.7279 |
| 0.0959 | 76.6667 | 460 | 0.5506 | 0.6325 | 0.7529 | 0.7596 | nan | 0.7081 | 0.6401 | 0.7706 | 0.7878 | 0.6339 | 0.7571 | 0.8553 | 0.8437 | 0.7636 | 0.7686 | 0.0 | 0.6859 | 0.5831 | 0.6937 | 0.7470 | 0.5459 | 0.7144 | 0.8041 | 0.7182 | 0.7359 | 0.7289 |
| 0.1181 | 80.0 | 480 | 0.5810 | 0.6227 | 0.7489 | 0.7528 | nan | 0.7194 | 0.6986 | 0.7478 | 0.7786 | 0.6016 | 0.7453 | 0.8501 | 0.8435 | 0.7140 | 0.7897 | 0.0 | 0.7035 | 0.6368 | 0.6793 | 0.7059 | 0.5201 | 0.6781 | 0.7990 | 0.7002 | 0.6957 | 0.7306 |
| 0.1272 | 83.3333 | 500 | 0.5927 | 0.6213 | 0.7406 | 0.7501 | nan | 0.7056 | 0.6515 | 0.7716 | 0.7891 | 0.6042 | 0.7289 | 0.8221 | 0.8266 | 0.7345 | 0.7725 | 0.0 | 0.6898 | 0.5965 | 0.7000 | 0.7362 | 0.5184 | 0.7017 | 0.7840 | 0.6860 | 0.7035 | 0.7186 |
| 0.1653 | 86.6667 | 520 | 0.5653 | 0.6368 | 0.7586 | 0.7645 | nan | 0.7195 | 0.6718 | 0.7697 | 0.7843 | 0.6201 | 0.7360 | 0.8479 | 0.8640 | 0.8044 | 0.7683 | 0.0 | 0.7011 | 0.6056 | 0.7009 | 0.7658 | 0.5286 | 0.7009 | 0.8013 | 0.7081 | 0.7787 | 0.7136 |
| 0.1633 | 90.0 | 540 | 0.5539 | 0.6421 | 0.7641 | 0.7693 | nan | 0.7257 | 0.6989 | 0.7533 | 0.8062 | 0.6249 | 0.7595 | 0.8532 | 0.8622 | 0.7844 | 0.7728 | 0.0 | 0.7107 | 0.6324 | 0.6835 | 0.7668 | 0.5405 | 0.6966 | 0.8200 | 0.7257 | 0.7640 | 0.7232 |
| 0.0863 | 93.3333 | 560 | 0.5737 | 0.6348 | 0.7544 | 0.7607 | nan | 0.7237 | 0.6606 | 0.7705 | 0.7775 | 0.6186 | 0.7401 | 0.8493 | 0.8438 | 0.7727 | 0.7868 | 0.0 | 0.7032 | 0.5988 | 0.7055 | 0.7366 | 0.5337 | 0.7141 | 0.8136 | 0.7060 | 0.7485 | 0.7230 |
| 0.072 | 96.6667 | 580 | 0.5544 | 0.6400 | 0.7616 | 0.7691 | nan | 0.7153 | 0.6546 | 0.7755 | 0.7964 | 0.6272 | 0.7607 | 0.8605 | 0.8571 | 0.7832 | 0.7854 | 0.0 | 0.6956 | 0.5972 | 0.7029 | 0.7550 | 0.5402 | 0.7088 | 0.8237 | 0.7215 | 0.7640 | 0.7309 |
| 0.1014 | 100.0 | 600 | 0.5610 | 0.6387 | 0.7597 | 0.7684 | nan | 0.7170 | 0.6456 | 0.7788 | 0.7965 | 0.6187 | 0.7553 | 0.8524 | 0.8602 | 0.7899 | 0.7831 | 0.0 | 0.6979 | 0.5897 | 0.7036 | 0.7569 | 0.5348 | 0.7058 | 0.8192 | 0.7187 | 0.7689 | 0.7295 |
### Framework versions
- Transformers 4.52.2
- Pytorch 2.6.0+cu124
- Datasets 2.16.1
- Tokenizers 0.21.1
|
Pelochus/ezrkllm-collection | Pelochus | 2025-05-28T14:36:35Z | 0 | 18 | null | [
"rockchip",
"rk3588",
"rkllm",
"text-generation-inference",
"text-generation",
"license:mit",
"region:us"
]
| text-generation | 2024-04-10T16:22:07Z | ---
license: mit
tags:
- rockchip
- rk3588
- rkllm
- text-generation-inference
pipeline_tag: text-generation
---
# ezrkllm-collection
Collection of LLMs compatible with Rockchip's chips using their rkllm-toolkit.
This repo contains the converted models for running on the RK3588 NPU found in SBCs like Orange Pi 5, NanoPi R6 and Radxa Rock 5.
Check the main repo on GitHub for how to install and use: https://github.com/Pelochus/ezrknpu
## Available LLMs
Before running any LLM, take into account that the required RAM is roughly 1.5-3 times the model size (this is an estimate; extensive testing hasn't been done yet).
Right now, only the following models have been converted:
| LLM | Parameters | Link |
| --------------------- | ----------- | ------------------------------------------------------------- |
| DeepSeek R1 Distill (Qwen) | 1.5B | https://huggingface.co/Pelochus/deepseek-R1-distill-qwen-1.5B |
| Qwen Chat | 1.8B | https://huggingface.co/Pelochus/qwen-1_8B-rk3588 |
| Gemma | 2B | https://huggingface.co/Pelochus/gemma-2b-rk3588 |
| Microsoft Phi-2 | 2.7B | https://huggingface.co/Pelochus/phi-2-rk3588 |
| Microsoft Phi-3 Mini | 3.8B | https://huggingface.co/Pelochus/phi-3-mini-rk3588 |
| Llama 2 7B | 7B | https://huggingface.co/Pelochus/llama2-chat-7b-hf-rk3588 |
| Llama 2 13B | 13B | https://huggingface.co/Pelochus/llama2-chat-13b-hf-rk3588 |
| TinyLlama v1 | 1.1B | https://huggingface.co/Pelochus/tinyllama-v1-rk3588 |
| Qwen 1.5 Chat | 4B | https://huggingface.co/Pelochus/qwen1.5-chat-4B-rk3588 |
| Qwen 2 | 1.5B | https://huggingface.co/Pelochus/qwen2-1_5B-rk3588 |
Llama 2 was converted using Azure servers.
For reference, converting Phi-2 peaked at about 15 GB of RAM + 25 GB of swap (counting the OS, which was using about 2 GB at most).
Converting Llama 2 7B peaked at about 32 GB of RAM + 50 GB of swap.
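The 1.5-3x rule of thumb mentioned above can be sketched as a quick helper; the multipliers are the rough estimates from this card, not measured values:

```python
def estimate_ram_gb(model_size_gb: float, low: float = 1.5, high: float = 3.0):
    """Rough (low, high) RAM range in GB needed to run a converted model,
    using the 1.5x-3x rule of thumb from this card."""
    return (model_size_gb * low, model_size_gb * high)

# A w8a8-quantized 7B model is roughly 7 GB on disk:
low, high = estimate_ram_gb(7.0)
print(f"~{low:.1f}-{high:.1f} GB of RAM")  # → ~10.5-21.0 GB of RAM
```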
## Downloading a model
Use:
`git clone LINK_FROM_PREVIOUS_TABLE_HERE`
And then (may not be necessary):
`git lfs pull`
If the first clone gives you problems (takes too long), you can also:
`GIT_LFS_SKIP_SMUDGE=1 git clone LINK_FROM_PREVIOUS_TABLE_HERE`
And then run `git lfs pull` inside the cloned folder to download the full model.
## RKLLM parameters used
The RK3588 **only supports w8a8 quantization**, so that was the quantization used for ALL models.
Aside from that, the RKLLM toolkit allows converting with no optimization (0) or with optimization (1).
All models here were converted with optimization enabled (1).
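As an aside, "w8a8" means both weights and activations are quantized to 8-bit integers. A toy sketch of symmetric int8 quantization (illustrative only; not Rockchip's actual implementation):

```python
def quantize_int8(values):
    """Symmetric per-tensor int8 quantization: map floats onto [-127, 127]."""
    scale = max(abs(v) for v in values) / 127.0
    quantized = [round(v / scale) for v in values]
    return quantized, scale

def dequantize(quantized, scale):
    """Map int8 values back to approximate floats."""
    return [q * scale for q in quantized]

weights = [0.8, -0.3, 0.05, -1.0]
q, scale = quantize_int8(weights)
recovered = dequantize(q, scale)
# Round-trip error is bounded by half the quantization step:
assert all(abs(w - r) <= scale / 2 + 1e-9 for w, r in zip(weights, recovered))
```

The w8a8 scheme applies this kind of 8-bit mapping to activations as well, which is why it halves memory relative to fp16 at some cost in precision.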
## Future additions
- [x] Converting other compatible LLMs
- [ ] Adding support for other compatible Rockchip SoCs
## More info
- My fork of rknn-llm: https://github.com/Pelochus/ezrknn-llm
- Rockchip's original rknn-llm repo: https://github.com/airockchip/rknn-llm |
HosenM11/Hosen | HosenM11 | 2025-05-28T14:36:26Z | 0 | 0 | null | [
"license:bigscience-bloom-rail-1.0",
"region:us"
]
| null | 2025-05-28T14:36:26Z | ---
license: bigscience-bloom-rail-1.0
---
|
sinistera/Tiny-LLM-Q8_0-GGUF | sinistera | 2025-05-28T14:35:45Z | 0 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"dataset:HuggingFaceFW/fineweb",
"base_model:arnir0/Tiny-LLM",
"base_model:quantized:arnir0/Tiny-LLM",
"license:mit",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-28T14:35:43Z | ---
license: mit
datasets:
- HuggingFaceFW/fineweb
pipeline_tag: text-generation
base_model: arnir0/Tiny-LLM
tags:
- llama-cpp
- gguf-my-repo
---
# sinistera/Tiny-LLM-Q8_0-GGUF
This model was converted to GGUF format from [`arnir0/Tiny-LLM`](https://huggingface.co/arnir0/Tiny-LLM) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/arnir0/Tiny-LLM) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo sinistera/Tiny-LLM-Q8_0-GGUF --hf-file tiny-llm-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo sinistera/Tiny-LLM-Q8_0-GGUF --hf-file tiny-llm-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo sinistera/Tiny-LLM-Q8_0-GGUF --hf-file tiny-llm-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo sinistera/Tiny-LLM-Q8_0-GGUF --hf-file tiny-llm-q8_0.gguf -c 2048
```
|
selsar/nli-behavioral-groups | selsar | 2025-05-28T14:27:32Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2025-05-28T14:26:51Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mnickkk/csm-aika-2 | mnickkk | 2025-05-28T14:26:16Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"csm",
"text-to-audio",
"text-generation-inference",
"unsloth",
"en",
"base_model:mnickkk/csm",
"base_model:finetune:mnickkk/csm",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| text-to-audio | 2025-05-28T12:07:04Z | ---
base_model: mnickkk/csm
tags:
- text-generation-inference
- transformers
- unsloth
- csm
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** mnickkk
- **License:** apache-2.0
- **Finetuned from model :** mnickkk/csm
This csm model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
HPLT/hplt2c_nld_checkpoints | HPLT | 2025-05-28T14:25:10Z | 0 | 0 | null | [
"pytorch",
"llama",
"HPLT",
"decoder",
"nl",
"dataset:HPLT/HPLT2.0_cleaned",
"arxiv:2503.10267",
"license:apache-2.0",
"region:us"
]
| null | 2025-05-26T08:49:52Z | ---
language:
- nl
tags:
- HPLT
- decoder
license: apache-2.0
datasets:
- HPLT/HPLT2.0_cleaned
---
# HPLT v2.0 - Cleaned - Dutch
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
This is one of the decoder-only language models trained on [HPLT2.0_cleaned](https://huggingface.co/datasets/HPLT/HPLT2.0_cleaned).
All the HPLT decoder-only models use the same hyper-parameters, roughly following the llama architecture with 2.15B parameters in total:
- hidden size: 2048
- attention heads: 32
- layers: 24
- sequence length: 2048
## Intermediate checkpoints
We are releasing intermediate checkpoints for each model at intervals of every 1000 training steps in separate branches. The naming convention is `checkpoint_XXXXXXX`, with the step number zero-padded to seven digits: for example, `checkpoint_0005000`. The checkpoints range from `checkpoint_0001000` to `checkpoint_0047684`, and the latter is in the main branch.
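For scripted access, the branch names can be generated from the step numbers (a small sketch; the step range is taken from this card, and the exact set of published branches should be verified against the repo):

```python
# Checkpoint branches at every 1000 steps, zero-padded to seven digits,
# plus the final checkpoint_0047684 (which also lives in the main branch).
branches = [f"checkpoint_{step:07d}" for step in range(1000, 48000, 1000)]
branches.append("checkpoint_0047684")

print(branches[0], branches[-2], branches[-1])
# → checkpoint_0001000 checkpoint_0047000 checkpoint_0047684
```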
## Cite us
```bibtex
@misc{burchell2025expandedmassivemultilingualdataset,
title={An Expanded Massive Multilingual Dataset for High-Performance Language Technologies},
author={Laurie Burchell and Ona de Gibert and Nikolay Arefyev and Mikko Aulamo and Marta Bañón and Pinzhen Chen and Mariia Fedorova and Liane Guillou and Barry Haddow and Jan Hajič and Jindřich Helcl and Erik Henriksson and Mateusz Klimaszewski and Ville Komulainen and Andrey Kutuzov and Joona Kytöniemi and Veronika Laippala and Petter Mæhlum and Bhavitvya Malik and Farrokh Mehryary and Vladislav Mikhailov and Nikita Moghe and Amanda Myntti and Dayyán O'Brien and Stephan Oepen and Proyag Pal and Jousia Piha and Sampo Pyysalo and Gema Ramírez-Sánchez and David Samuel and Pavel Stepachev and Jörg Tiedemann and Dušan Variš and Tereza Vojtěchová and Jaume Zaragoza-Bernabeu},
year={2025},
eprint={2503.10267},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2503.10267},
}
``` |
Diamantis99/AQocBC3 | Diamantis99 | 2025-05-28T14:24:58Z | 0 | 0 | segmentation-models-pytorch | [
"segmentation-models-pytorch",
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"semantic-segmentation",
"pytorch",
"image-segmentation",
"license:mit",
"region:us"
]
| image-segmentation | 2025-05-28T14:24:54Z | ---
library_name: segmentation-models-pytorch
license: mit
pipeline_tag: image-segmentation
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
- segmentation-models-pytorch
- semantic-segmentation
- pytorch
languages:
- python
---
# PSPNet Model Card
Table of Contents:
- [Load trained model](#load-trained-model)
- [Model init parameters](#model-init-parameters)
- [Model metrics](#model-metrics)
- [Dataset](#dataset)
## Load trained model
```python
import segmentation_models_pytorch as smp
model = smp.from_pretrained("<save-directory-or-this-repo>")
```
## Model init parameters
```python
model_init_params = {
"encoder_name": "timm-tf_efficientnet_lite4",
"encoder_weights": "imagenet",
"encoder_depth": 3,
"psp_out_channels": 512,
"decoder_use_norm": "batchnorm",
"psp_dropout": 0.2,
"in_channels": 3,
"classes": 1,
"activation": None,
"upsampling": 8,
"aux_params": None
}
```
## Model metrics
```json
[
{
"test_per_image_iou": 0.7031219005584717,
"test_dataset_iou": 0.7441089749336243
}
]
```
## Dataset
Dataset name: VisionPipe
## More Information
- Library: https://github.com/qubvel/segmentation_models.pytorch
- Docs: https://smp.readthedocs.io/en/latest/
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) |
myfi/parser_model_sgpt_v2 | myfi | 2025-05-28T14:23:59Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/Qwen2.5-3B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-3B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-28T14:20:10Z | ---
base_model: unsloth/Qwen2.5-3B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** myfi
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2.5-3B-Instruct
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Kudod/bert-base-uncased-ner-ghtk-cs-new-data-3090-28may-1 | Kudod | 2025-05-28T14:23:09Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2025-05-28T14:08:23Z | ---
library_name: transformers
license: apache-2.0
base_model: google-bert/bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased-ner-ghtk-cs-new-data-3090-28may-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-ner-ghtk-cs-new-data-3090-28may-1
This model is a fine-tuned version of [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1753
- cmt: {'precision': 0.6875, 'recall': 0.7857142857142857, 'f1': 0.7333333333333334, 'number': 14}
- A: {'precision': 0.9053117782909931, 'recall': 0.937799043062201, 'f1': 0.9212690951821386, 'number': 418}
- Gân hàng: {'precision': 0.7804878048780488, 'recall': 0.9142857142857143, 'f1': 0.8421052631578947, 'number': 35}
- Hương thức thanh toán: {'precision': 0.7222222222222222, 'recall': 0.8666666666666667, 'f1': 0.7878787878787877, 'number': 30}
- Hối lượng: {'precision': 0.3, 'recall': 0.25, 'f1': 0.2727272727272727, 'number': 12}
- Iền: {'precision': 0.6415094339622641, 'recall': 0.8717948717948718, 'f1': 0.7391304347826088, 'number': 39}
- Mail: {'precision': 0.931129476584022, 'recall': 0.9337016574585635, 'f1': 0.9324137931034482, 'number': 362}
- Ên người: {'precision': 0.3939393939393939, 'recall': 0.43333333333333335, 'f1': 0.4126984126984127, 'number': 30}
- Ơn vị đo: {'precision': 0.6071428571428571, 'recall': 0.6071428571428571, 'f1': 0.6071428571428571, 'number': 28}
- Ản phẩm cụ thể: {'precision': 0.7666666666666667, 'recall': 0.359375, 'f1': 0.4893617021276596, 'number': 128}
- Ản phẩm trừu tượng: {'precision': 0.59375, 'recall': 0.3333333333333333, 'f1': 0.42696629213483145, 'number': 57}
- Ịa chỉ cụ thể: {'precision': 0.15384615384615385, 'recall': 0.13333333333333333, 'f1': 0.14285714285714288, 'number': 75}
- Ịa chỉ trừu tượng: {'precision': 0.7049180327868853, 'recall': 0.5733333333333334, 'f1': 0.6323529411764707, 'number': 75}
- Overall Precision: 0.7994
- Overall Recall: 0.7552
- Overall F1: 0.7766
- Overall Accuracy: 0.9659
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | cmt | A | Gân hàng | Hương thức thanh toán | Hối lượng | Iền | Mail | Ên người | Ơn vị đo | Ản phẩm cụ thể | Ản phẩm trừu tượng | Ịa chỉ cụ thể | Ịa chỉ trừu tượng | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:-------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------:|:----------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------:|:----------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------------------------:|:-----------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| No log | 1.0 | 368 | 0.1682 | {'precision': 0.4074074074074074, 'recall': 0.7857142857142857, 'f1': 0.5365853658536585, 'number': 14} | {'precision': 0.9020979020979021, 'recall': 0.9258373205741627, 'f1': 0.9138134592680047, 'number': 418} | {'precision': 0.5813953488372093, 'recall': 0.7142857142857143, 'f1': 0.6410256410256411, 'number': 35} | {'precision': 0.32, 'recall': 0.26666666666666666, 'f1': 0.2909090909090909, 'number': 30} | {'precision': 0.09090909090909091, 'recall': 0.16666666666666666, 'f1': 0.11764705882352942, 'number': 12} | {'precision': 0.5396825396825397, 'recall': 0.8717948717948718, 'f1': 0.6666666666666666, 'number': 39} | {'precision': 0.8506024096385543, 'recall': 0.9751381215469613, 'f1': 0.9086229086229086, 'number': 362} | {'precision': 0.21739130434782608, 'recall': 0.16666666666666666, 'f1': 0.18867924528301885, 'number': 30} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 28} | {'precision': 1.0, 'recall': 0.0390625, 'f1': 0.07518796992481203, 'number': 128} | {'precision': 0.6428571428571429, 'recall': 0.15789473684210525, 'f1': 0.25352112676056343, 'number': 57} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 75} | {'precision': 0.6779661016949152, 'recall': 0.5333333333333333, 'f1': 0.5970149253731344, 'number': 75} | 0.7772 | 0.6746 | 0.7223 | 0.9519 |
| 0.189 | 2.0 | 736 | 0.1261 | {'precision': 0.631578947368421, 'recall': 0.8571428571428571, 'f1': 0.7272727272727273, 'number': 14} | {'precision': 0.8813186813186813, 'recall': 0.9593301435406698, 'f1': 0.9186712485681557, 'number': 418} | {'precision': 0.6744186046511628, 'recall': 0.8285714285714286, 'f1': 0.7435897435897435, 'number': 35} | {'precision': 0.5238095238095238, 'recall': 0.7333333333333333, 'f1': 0.611111111111111, 'number': 30} | {'precision': 0.19047619047619047, 'recall': 0.3333333333333333, 'f1': 0.24242424242424246, 'number': 12} | {'precision': 0.5483870967741935, 'recall': 0.4358974358974359, 'f1': 0.4857142857142857, 'number': 39} | {'precision': 0.9716981132075472, 'recall': 0.8535911602209945, 'f1': 0.9088235294117647, 'number': 362} | {'precision': 0.3333333333333333, 'recall': 0.23333333333333334, 'f1': 0.27450980392156865, 'number': 30} | {'precision': 0.48, 'recall': 0.42857142857142855, 'f1': 0.4528301886792452, 'number': 28} | {'precision': 0.5952380952380952, 'recall': 0.1953125, 'f1': 0.29411764705882354, 'number': 128} | {'precision': 0.6, 'recall': 0.2631578947368421, 'f1': 0.36585365853658536, 'number': 57} | {'precision': 0.05, 'recall': 0.04, 'f1': 0.044444444444444446, 'number': 75} | {'precision': 0.6349206349206349, 'recall': 0.5333333333333333, 'f1': 0.5797101449275363, 'number': 75} | 0.7691 | 0.6876 | 0.7261 | 0.9593 |
| 0.0843 | 3.0 | 1104 | 0.1251 | {'precision': 0.7647058823529411, 'recall': 0.9285714285714286, 'f1': 0.8387096774193549, 'number': 14} | {'precision': 0.917910447761194, 'recall': 0.8827751196172249, 'f1': 0.9, 'number': 418} | {'precision': 0.7317073170731707, 'recall': 0.8571428571428571, 'f1': 0.7894736842105263, 'number': 35} | {'precision': 0.6875, 'recall': 0.7333333333333333, 'f1': 0.7096774193548386, 'number': 30} | {'precision': 0.16666666666666666, 'recall': 0.08333333333333333, 'f1': 0.1111111111111111, 'number': 12} | {'precision': 0.65, 'recall': 0.6666666666666666, 'f1': 0.6582278481012659, 'number': 39} | {'precision': 0.8935643564356436, 'recall': 0.9972375690607734, 'f1': 0.9425587467362925, 'number': 362} | {'precision': 0.29411764705882354, 'recall': 0.3333333333333333, 'f1': 0.3125, 'number': 30} | {'precision': 0.391304347826087, 'recall': 0.32142857142857145, 'f1': 0.35294117647058826, 'number': 28} | {'precision': 0.7222222222222222, 'recall': 0.203125, 'f1': 0.3170731707317073, 'number': 128} | {'precision': 0.5, 'recall': 0.2631578947368421, 'f1': 0.3448275862068966, 'number': 57} | {'precision': 0.057971014492753624, 'recall': 0.05333333333333334, 'f1': 0.05555555555555556, 'number': 75} | {'precision': 0.7884615384615384, 'recall': 0.5466666666666666, 'f1': 0.6456692913385826, 'number': 75} | 0.7816 | 0.7114 | 0.7449 | 0.9628 |
| 0.0843 | 4.0 | 1472 | 0.1403 | {'precision': 0.7857142857142857, 'recall': 0.7857142857142857, 'f1': 0.7857142857142857, 'number': 14} | {'precision': 0.9150943396226415, 'recall': 0.9282296650717703, 'f1': 0.9216152019002375, 'number': 418} | {'precision': 0.717948717948718, 'recall': 0.8, 'f1': 0.7567567567567569, 'number': 35} | {'precision': 0.6388888888888888, 'recall': 0.7666666666666667, 'f1': 0.696969696969697, 'number': 30} | {'precision': 0.375, 'recall': 0.25, 'f1': 0.3, 'number': 12} | {'precision': 0.660377358490566, 'recall': 0.8974358974358975, 'f1': 0.7608695652173912, 'number': 39} | {'precision': 0.9083557951482479, 'recall': 0.930939226519337, 'f1': 0.9195088676671214, 'number': 362} | {'precision': 0.37037037037037035, 'recall': 0.3333333333333333, 'f1': 0.3508771929824561, 'number': 30} | {'precision': 0.45454545454545453, 'recall': 0.35714285714285715, 'f1': 0.4, 'number': 28} | {'precision': 0.7272727272727273, 'recall': 0.25, 'f1': 0.37209302325581395, 'number': 128} | {'precision': 0.6666666666666666, 'recall': 0.24561403508771928, 'f1': 0.358974358974359, 'number': 57} | {'precision': 0.1111111111111111, 'recall': 0.08, 'f1': 0.09302325581395349, 'number': 75} | {'precision': 0.6896551724137931, 'recall': 0.5333333333333333, 'f1': 0.6015037593984963, 'number': 75} | 0.8002 | 0.7191 | 0.7575 | 0.9624 |
| 0.0555 | 5.0 | 1840 | 0.1725 | {'precision': 0.8235294117647058, 'recall': 1.0, 'f1': 0.9032258064516129, 'number': 14} | {'precision': 0.9144893111638955, 'recall': 0.9210526315789473, 'f1': 0.9177592371871275, 'number': 418} | {'precision': 0.8157894736842105, 'recall': 0.8857142857142857, 'f1': 0.8493150684931505, 'number': 35} | {'precision': 0.7352941176470589, 'recall': 0.8333333333333334, 'f1': 0.78125, 'number': 30} | {'precision': 0.4, 'recall': 0.3333333333333333, 'f1': 0.3636363636363636, 'number': 12} | {'precision': 0.6153846153846154, 'recall': 0.8205128205128205, 'f1': 0.7032967032967034, 'number': 39} | {'precision': 0.9896907216494846, 'recall': 0.7955801104972375, 'f1': 0.8820826952526799, 'number': 362} | {'precision': 0.4, 'recall': 0.3333333333333333, 'f1': 0.3636363636363636, 'number': 30} | {'precision': 0.5384615384615384, 'recall': 0.5, 'f1': 0.5185185185185186, 'number': 28} | {'precision': 0.8275862068965517, 'recall': 0.1875, 'f1': 0.30573248407643316, 'number': 128} | {'precision': 0.65, 'recall': 0.22807017543859648, 'f1': 0.33766233766233766, 'number': 57} | {'precision': 0.21951219512195122, 'recall': 0.12, 'f1': 0.15517241379310345, 'number': 75} | {'precision': 0.7017543859649122, 'recall': 0.5333333333333333, 'f1': 0.6060606060606061, 'number': 75} | 0.8379 | 0.6823 | 0.7521 | 0.9628 |
| 0.0315 | 6.0 | 2208 | 0.1663 | {'precision': 0.7333333333333333, 'recall': 0.7857142857142857, 'f1': 0.7586206896551724, 'number': 14} | {'precision': 0.9097222222222222, 'recall': 0.9401913875598086, 'f1': 0.9247058823529412, 'number': 418} | {'precision': 0.725, 'recall': 0.8285714285714286, 'f1': 0.7733333333333333, 'number': 35} | {'precision': 0.6944444444444444, 'recall': 0.8333333333333334, 'f1': 0.7575757575757577, 'number': 30} | {'precision': 0.2222222222222222, 'recall': 0.16666666666666666, 'f1': 0.1904761904761905, 'number': 12} | {'precision': 0.5714285714285714, 'recall': 0.8205128205128205, 'f1': 0.6736842105263158, 'number': 39} | {'precision': 0.9324324324324325, 'recall': 0.9530386740331491, 'f1': 0.9426229508196722, 'number': 362} | {'precision': 0.391304347826087, 'recall': 0.3, 'f1': 0.33962264150943394, 'number': 30} | {'precision': 0.6333333333333333, 'recall': 0.6785714285714286, 'f1': 0.6551724137931035, 'number': 28} | {'precision': 0.7241379310344828, 'recall': 0.328125, 'f1': 0.45161290322580655, 'number': 128} | {'precision': 0.7391304347826086, 'recall': 0.2982456140350877, 'f1': 0.425, 'number': 57} | {'precision': 0.14516129032258066, 'recall': 0.12, 'f1': 0.13138686131386865, 'number': 75} | {'precision': 0.6363636363636364, 'recall': 0.56, 'f1': 0.5957446808510639, 'number': 75} | 0.7992 | 0.7483 | 0.7729 | 0.9643 |
| 0.0196 | 7.0 | 2576 | 0.1568 | {'precision': 0.5714285714285714, 'recall': 0.5714285714285714, 'f1': 0.5714285714285714, 'number': 14} | {'precision': 0.9271844660194175, 'recall': 0.9138755980861244, 'f1': 0.9204819277108434, 'number': 418} | {'precision': 0.75, 'recall': 0.8571428571428571, 'f1': 0.7999999999999999, 'number': 35} | {'precision': 0.6666666666666666, 'recall': 0.8666666666666667, 'f1': 0.7536231884057971, 'number': 30} | {'precision': 0.3, 'recall': 0.25, 'f1': 0.2727272727272727, 'number': 12} | {'precision': 0.5964912280701754, 'recall': 0.8717948717948718, 'f1': 0.7083333333333334, 'number': 39} | {'precision': 0.9354838709677419, 'recall': 0.8812154696132597, 'f1': 0.9075391180654339, 'number': 362} | {'precision': 0.375, 'recall': 0.4, 'f1': 0.38709677419354843, 'number': 30} | {'precision': 0.6451612903225806, 'recall': 0.7142857142857143, 'f1': 0.6779661016949152, 'number': 28} | {'precision': 0.6235294117647059, 'recall': 0.4140625, 'f1': 0.49765258215962443, 'number': 128} | {'precision': 0.5526315789473685, 'recall': 0.3684210526315789, 'f1': 0.4421052631578947, 'number': 57} | {'precision': 0.14516129032258066, 'recall': 0.12, 'f1': 0.13138686131386865, 'number': 75} | {'precision': 0.6461538461538462, 'recall': 0.56, 'f1': 0.6000000000000002, 'number': 75} | 0.7822 | 0.7360 | 0.7584 | 0.9639 |
| 0.0196 | 8.0 | 2944 | 0.1689 | {'precision': 0.6875, 'recall': 0.7857142857142857, 'f1': 0.7333333333333334, 'number': 14} | {'precision': 0.9107551487414187, 'recall': 0.9521531100478469, 'f1': 0.9309941520467836, 'number': 418} | {'precision': 0.7142857142857143, 'recall': 0.8571428571428571, 'f1': 0.7792207792207793, 'number': 35} | {'precision': 0.7352941176470589, 'recall': 0.8333333333333334, 'f1': 0.78125, 'number': 30} | {'precision': 0.42857142857142855, 'recall': 0.25, 'f1': 0.3157894736842105, 'number': 12} | {'precision': 0.6274509803921569, 'recall': 0.8205128205128205, 'f1': 0.711111111111111, 'number': 39} | {'precision': 0.9316939890710383, 'recall': 0.9419889502762431, 'f1': 0.9368131868131868, 'number': 362} | {'precision': 0.3939393939393939, 'recall': 0.43333333333333335, 'f1': 0.4126984126984127, 'number': 30} | {'precision': 0.6296296296296297, 'recall': 0.6071428571428571, 'f1': 0.6181818181818182, 'number': 28} | {'precision': 0.803921568627451, 'recall': 0.3203125, 'f1': 0.4581005586592179, 'number': 128} | {'precision': 0.6521739130434783, 'recall': 0.2631578947368421, 'f1': 0.37500000000000006, 'number': 57} | {'precision': 0.1206896551724138, 'recall': 0.09333333333333334, 'f1': 0.10526315789473685, 'number': 75} | {'precision': 0.6774193548387096, 'recall': 0.56, 'f1': 0.613138686131387, 'number': 75} | 0.8078 | 0.7483 | 0.7769 | 0.9658 |
| 0.0133 | 9.0 | 3312 | 0.1717 | {'precision': 0.6875, 'recall': 0.7857142857142857, 'f1': 0.7333333333333334, 'number': 14} | {'precision': 0.9029345372460497, 'recall': 0.9569377990430622, 'f1': 0.9291521486643437, 'number': 418} | {'precision': 0.7142857142857143, 'recall': 0.8571428571428571, 'f1': 0.7792207792207793, 'number': 35} | {'precision': 0.7428571428571429, 'recall': 0.8666666666666667, 'f1': 0.8, 'number': 30} | {'precision': 0.2, 'recall': 0.16666666666666666, 'f1': 0.1818181818181818, 'number': 12} | {'precision': 0.6470588235294118, 'recall': 0.8461538461538461, 'f1': 0.7333333333333334, 'number': 39} | {'precision': 0.9411764705882353, 'recall': 0.8839779005524862, 'f1': 0.9116809116809117, 'number': 362} | {'precision': 0.3611111111111111, 'recall': 0.43333333333333335, 'f1': 0.39393939393939387, 'number': 30} | {'precision': 0.6206896551724138, 'recall': 0.6428571428571429, 'f1': 0.6315789473684211, 'number': 28} | {'precision': 0.7121212121212122, 'recall': 0.3671875, 'f1': 0.4845360824742268, 'number': 128} | {'precision': 0.6333333333333333, 'recall': 0.3333333333333333, 'f1': 0.43678160919540227, 'number': 57} | {'precision': 0.13846153846153847, 'recall': 0.12, 'f1': 0.12857142857142856, 'number': 75} | {'precision': 0.65625, 'recall': 0.56, 'f1': 0.60431654676259, 'number': 75} | 0.7905 | 0.7444 | 0.7668 | 0.9647 |
| 0.0085 | 10.0 | 3680 | 0.1753 | {'precision': 0.6875, 'recall': 0.7857142857142857, 'f1': 0.7333333333333334, 'number': 14} | {'precision': 0.9053117782909931, 'recall': 0.937799043062201, 'f1': 0.9212690951821386, 'number': 418} | {'precision': 0.7804878048780488, 'recall': 0.9142857142857143, 'f1': 0.8421052631578947, 'number': 35} | {'precision': 0.7222222222222222, 'recall': 0.8666666666666667, 'f1': 0.7878787878787877, 'number': 30} | {'precision': 0.3, 'recall': 0.25, 'f1': 0.2727272727272727, 'number': 12} | {'precision': 0.6415094339622641, 'recall': 0.8717948717948718, 'f1': 0.7391304347826088, 'number': 39} | {'precision': 0.931129476584022, 'recall': 0.9337016574585635, 'f1': 0.9324137931034482, 'number': 362} | {'precision': 0.3939393939393939, 'recall': 0.43333333333333335, 'f1': 0.4126984126984127, 'number': 30} | {'precision': 0.6071428571428571, 'recall': 0.6071428571428571, 'f1': 0.6071428571428571, 'number': 28} | {'precision': 0.7666666666666667, 'recall': 0.359375, 'f1': 0.4893617021276596, 'number': 128} | {'precision': 0.59375, 'recall': 0.3333333333333333, 'f1': 0.42696629213483145, 'number': 57} | {'precision': 0.15384615384615385, 'recall': 0.13333333333333333, 'f1': 0.14285714285714288, 'number': 75} | {'precision': 0.7049180327868853, 'recall': 0.5733333333333334, 'f1': 0.6323529411764707, 'number': 75} | 0.7994 | 0.7552 | 0.7766 | 0.9659 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
|
andresparodi/andresbatch1 | andresparodi | 2025-05-28T14:19:43Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
]
| text-to-image | 2025-05-28T13:38:41Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: parodi
---
# Andresbatch1
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `parodi` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "parodi",
"lora_weights": "https://huggingface.co/andresparodi/andresbatch1/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('andresparodi/andresbatch1', weight_name='lora.safetensors')
image = pipeline('parodi').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/andresparodi/andresbatch1/discussions) to add images that show off what you’ve made with this LoRA.
|
plipustel/glue-mrpc-model | plipustel | 2025-05-28T14:19:26Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2025-05-28T14:18:57Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
HosenM10/Hosen | HosenM10 | 2025-05-28T14:18:09Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2025-05-28T14:18:09Z | ---
license: creativeml-openrail-m
---
|
Diamantis99/Cr4YZfP | Diamantis99 | 2025-05-28T14:17:35Z | 0 | 0 | segmentation-models-pytorch | [
"segmentation-models-pytorch",
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"semantic-segmentation",
"pytorch",
"image-segmentation",
"license:mit",
"region:us"
]
| image-segmentation | 2025-05-28T14:17:22Z | ---
library_name: segmentation-models-pytorch
license: mit
pipeline_tag: image-segmentation
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
- segmentation-models-pytorch
- semantic-segmentation
- pytorch
languages:
- python
---
# PSPNet Model Card
Table of Contents:
- [Load trained model](#load-trained-model)
- [Model init parameters](#model-init-parameters)
- [Model metrics](#model-metrics)
- [Dataset](#dataset)
## Load trained model
```python
import segmentation_models_pytorch as smp
model = smp.from_pretrained("<save-directory-or-this-repo>")
```
## Model init parameters
```python
model_init_params = {
"encoder_name": "timm-efficientnet-b7",
"encoder_weights": "imagenet",
"encoder_depth": 3,
"psp_out_channels": 512,
"decoder_use_norm": "batchnorm",
"psp_dropout": 0.2,
"in_channels": 3,
"classes": 1,
"activation": None,
"upsampling": 8,
"aux_params": None
}
```
## Model metrics
```json
[
{
"test_per_image_iou": 0.7598253488540649,
"test_dataset_iou": 0.7843102812767029
}
]
```
## Dataset
Dataset name: VisionPipe
## More Information
- Library: https://github.com/qubvel/segmentation_models.pytorch
- Docs: https://smp.readthedocs.io/en/latest/
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) |
TanAlexanderlz/Buatan2-Finetuned-Real_RGBCROP-Aug-8B16F | TanAlexanderlz | 2025-05-28T14:17:18Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:TanAlexanderlz/3_Buatan_RGBCROP_Aug16F-8B16F-2",
"base_model:finetune:TanAlexanderlz/3_Buatan_RGBCROP_Aug16F-8B16F-2",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
]
| video-classification | 2025-05-28T13:22:10Z | ---
library_name: transformers
license: cc-by-nc-4.0
base_model: TanAlexanderlz/3_Buatan_RGBCROP_Aug16F-8B16F-2
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: Buatan2-Finetuned-Real_RGBCROP-Aug-8B16F
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Buatan2-Finetuned-Real_RGBCROP-Aug-8B16F
This model is a fine-tuned version of [TanAlexanderlz/3_Buatan_RGBCROP_Aug16F-8B16F-2](https://huggingface.co/TanAlexanderlz/3_Buatan_RGBCROP_Aug16F-8B16F-2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1757
- Accuracy: 0.7484
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 3520
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.2811 | 0.0503 | 177 | 0.7131 | 0.71 |
| 0.0637 | 1.0503 | 354 | 1.7454 | 0.6567 |
| 0.215 | 2.0503 | 531 | 1.2591 | 0.72 |
| 0.0001 | 3.0503 | 708 | 1.8786 | 0.69 |
| 0.0 | 4.0503 | 885 | 2.2648 | 0.6967 |
| 0.0 | 5.0503 | 1062 | 1.9327 | 0.7133 |
| 0.028 | 6.0503 | 1239 | 1.9972 | 0.7067 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
Ayca11/checkpoints | Ayca11 | 2025-05-28T14:13:33Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:mo-thecreator/vit-Facial-Expression-Recognition",
"base_model:finetune:mo-thecreator/vit-Facial-Expression-Recognition",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2025-05-13T13:40:31Z | ---
library_name: transformers
base_model: motheecreator/vit-Facial-Expression-Recognition
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: checkpoints
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# checkpoints
This model is a fine-tuned version of [motheecreator/vit-Facial-Expression-Recognition](https://huggingface.co/motheecreator/vit-Facial-Expression-Recognition) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3915
- Accuracy: 0.868
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 512
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.52.2
- Pytorch 2.6.0+cu124
- Datasets 2.14.5
- Tokenizers 0.21.1
|
Sebastian117/Requiem | Sebastian117 | 2025-05-28T14:11:30Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2025-05-28T14:11:30Z | ---
license: apache-2.0
---
|
jordinia/NetPro-Qwen3-4B-2105 | jordinia | 2025-05-28T14:10:18Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/Qwen3-4B-unsloth-bnb-4bit",
"base_model:finetune:unsloth/Qwen3-4B-unsloth-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-28T14:07:49Z | ---
base_model: unsloth/Qwen3-4B-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** jordinia
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-4B-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
gabriellipsa/rl_course_vizdoom_health_gathering_supreme | gabriellipsa | 2025-05-28T14:09:39Z | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2025-05-28T13:19:16Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 11.98 +/- 6.02
name: mean_reward
verified: false
---
A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r gabriellipsa/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m <path.to.enjoy.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m <path.to.train.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
Diamantis99/np2qf7v | Diamantis99 | 2025-05-28T14:08:55Z | 0 | 0 | segmentation-models-pytorch | [
"segmentation-models-pytorch",
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"semantic-segmentation",
"pytorch",
"image-segmentation",
"license:mit",
"region:us"
]
| image-segmentation | 2025-05-28T14:08:42Z | ---
library_name: segmentation-models-pytorch
license: mit
pipeline_tag: image-segmentation
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
- segmentation-models-pytorch
- semantic-segmentation
- pytorch
languages:
- python
---
# PSPNet Model Card
Table of Contents:
- [Load trained model](#load-trained-model)
- [Model init parameters](#model-init-parameters)
- [Model metrics](#model-metrics)
- [Dataset](#dataset)
## Load trained model
```python
import segmentation_models_pytorch as smp
model = smp.from_pretrained("<save-directory-or-this-repo>")
```
## Model init parameters
```python
model_init_params = {
"encoder_name": "efficientnet-b7",
"encoder_weights": "imagenet",
"encoder_depth": 3,
"psp_out_channels": 512,
"decoder_use_norm": "batchnorm",
"psp_dropout": 0.2,
"in_channels": 3,
"classes": 1,
"activation": None,
"upsampling": 8,
"aux_params": None
}
```
## Model metrics
```json
[
{
"test_per_image_iou": 0.7642234563827515,
"test_dataset_iou": 0.7958465218544006
}
]
```
## Dataset
Dataset name: VisionPipe
## More Information
- Library: https://github.com/qubvel/segmentation_models.pytorch
- Docs: https://smp.readthedocs.io/en/latest/
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) |
FirstPotatoCoder/KhmerLLM_pretrained | FirstPotatoCoder | 2025-05-28T14:08:28Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-28T14:07:25Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ozonena2543/OZ | ozonena2543 | 2025-05-28T14:07:45Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2025-05-28T14:07:45Z | ---
license: apache-2.0
---
|
Tina94/anasun | Tina94 | 2025-05-28T14:07:39Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
]
| text-to-image | 2025-05-28T13:50:56Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: anasun
---
# Anasun
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `anasun` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "anasun",
"lora_weights": "https://huggingface.co/Tina94/anasun/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Tina94/anasun', weight_name='lora.safetensors')
image = pipeline('anasun').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 1000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/Tina94/anasun/discussions) to add images that show off what you’ve made with this LoRA.
|
tatsuyaaaaaaa/DeepSeek-R1-Distill-Qwen-7B-Japanese-gguf | tatsuyaaaaaaa | 2025-05-28T14:05:23Z | 0 | 0 | null | [
"gguf",
"ja",
"en",
"dataset:TFMC/imatrix-dataset-for-japanese-llm",
"base_model:lightblue/DeepSeek-R1-Distill-Qwen-7B-Japanese",
"base_model:quantized:lightblue/DeepSeek-R1-Distill-Qwen-7B-Japanese",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
]
| null | 2025-05-28T12:02:35Z | ---
license: apache-2.0
datasets:
- TFMC/imatrix-dataset-for-japanese-llm
language:
- ja
- en
base_model:
- lightblue/DeepSeek-R1-Distill-Qwen-7B-Japanese
---
This is a GGUF conversion of lightblue's [DeepSeek-R1-Distill-Qwen-7B-Japanese](https://huggingface.co/lightblue/DeepSeek-R1-Distill-Qwen-7B-Japanese).
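A minimal sketch of running this GGUF with `llama-cpp-python`. The quant filename below is an assumption — check the repo's file list for the actual file you want.

```python
# Hedged sketch: load and chat with this GGUF via llama-cpp-python.
# Assumptions (not confirmed by this card): the exact .gguf filename,
# and that the chosen quant fits in your machine's memory.

def build_messages(user_text: str):
    """OpenAI-style chat messages, the format llama-cpp-python accepts."""
    return [{"role": "user", "content": user_text}]

def run(user_text: str,
        model_path: str = "DeepSeek-R1-Distill-Qwen-7B-Japanese-Q4_K_M.gguf"):
    # Imported lazily so the sketch is readable without llama-cpp-python installed.
    from llama_cpp import Llama
    llm = Llama(model_path=model_path, n_ctx=4096)
    out = llm.create_chat_completion(messages=build_messages(user_text))
    return out["choices"][0]["message"]["content"]

if __name__ == "__main__":
    # Only the prompt construction runs here; run() needs the downloaded model.
    print(build_messages("日本の首都はどこですか?"))
```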
The [TFMC/imatrix-dataset-for-japanese-llm](https://huggingface.co/datasets/TFMC/imatrix-dataset-for-japanese-llm) dataset was used for the imatrix quantization. |
jordinia/NetPro-Qwen3-0.6B-2105 | jordinia | 2025-05-28T14:04:50Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/Qwen3-0.6B-unsloth-bnb-4bit",
"base_model:finetune:unsloth/Qwen3-0.6B-unsloth-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-28T14:04:24Z | ---
base_model: unsloth/Qwen3-0.6B-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** jordinia
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-0.6B-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Diamantis99/hbGqPHH | Diamantis99 | 2025-05-28T14:04:26Z | 0 | 0 | segmentation-models-pytorch | [
"segmentation-models-pytorch",
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"semantic-segmentation",
"pytorch",
"image-segmentation",
"license:mit",
"region:us"
]
| image-segmentation | 2025-05-28T14:04:17Z | ---
library_name: segmentation-models-pytorch
license: mit
pipeline_tag: image-segmentation
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
- segmentation-models-pytorch
- semantic-segmentation
- pytorch
languages:
- python
---
# PSPNet Model Card
Table of Contents:
- [Load trained model](#load-trained-model)
- [Model init parameters](#model-init-parameters)
- [Model metrics](#model-metrics)
- [Dataset](#dataset)
## Load trained model
```python
import segmentation_models_pytorch as smp
model = smp.from_pretrained("<save-directory-or-this-repo>")
```
## Model init parameters
```python
model_init_params = {
"encoder_name": "inceptionv4",
"encoder_weights": "imagenet",
"encoder_depth": 3,
"psp_out_channels": 512,
"decoder_use_norm": "batchnorm",
"psp_dropout": 0.2,
"in_channels": 3,
"classes": 1,
"activation": None,
"upsampling": 8,
"aux_params": None
}
```
## Model metrics
```json
[
{
"test_per_image_iou": 0.7647576332092285,
"test_dataset_iou": 0.8186371922492981
}
]
```
## Dataset
Dataset name: VisionPipe
## More Information
- Library: https://github.com/qubvel/segmentation_models.pytorch
- Docs: https://smp.readthedocs.io/en/latest/
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) |
moyixiao/Qwen25-0.5B-grpo-600p | moyixiao | 2025-05-28T14:04:11Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"trl",
"grpo",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-28T14:03:43Z | ---
library_name: transformers
tags:
- trl
- grpo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
leobianco/npov_RM_model_mistralai_seed_200898_SYN_LLM_false_SYN_STRUCT_true_epochs_3_lr_5e-4_lora_8 | leobianco | 2025-05-28T14:03:53Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-28T13:26:56Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Akchunks/ppo-SnowballTarget | Akchunks | 2025-05-28T14:03:46Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
]
| reinforcement-learning | 2025-05-28T14:03:43Z | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Akchunks/ppo-SnowballTarget
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
deliciouscat/Qwen3-8B-A1.4B-0526-tmp | deliciouscat | 2025-05-28T14:02:43Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3_moe",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| feature-extraction | 2025-05-28T13:57:25Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Diamantis99/HcWiHqh | Diamantis99 | 2025-05-28T14:01:43Z | 0 | 0 | segmentation-models-pytorch | [
"segmentation-models-pytorch",
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"semantic-segmentation",
"pytorch",
"image-segmentation",
"license:mit",
"region:us"
]
| image-segmentation | 2025-05-28T14:01:34Z | ---
library_name: segmentation-models-pytorch
license: mit
pipeline_tag: image-segmentation
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
- segmentation-models-pytorch
- semantic-segmentation
- pytorch
languages:
- python
---
# PSPNet Model Card
Table of Contents:
- [Load trained model](#load-trained-model)
- [Model init parameters](#model-init-parameters)
- [Model metrics](#model-metrics)
- [Dataset](#dataset)
## Load trained model
```python
import segmentation_models_pytorch as smp
model = smp.from_pretrained("<save-directory-or-this-repo>")
```
## Model init parameters
```python
model_init_params = {
"encoder_name": "inceptionresnetv2",
"encoder_weights": "imagenet",
"encoder_depth": 3,
"psp_out_channels": 512,
"decoder_use_norm": "batchnorm",
"psp_dropout": 0.2,
"in_channels": 3,
"classes": 1,
"activation": None,
"upsampling": 8,
"aux_params": None
}
```
## Model metrics
```json
[
{
"test_per_image_iou": 0.7463434338569641,
"test_dataset_iou": 0.7960305213928223
}
]
```
## Dataset
Dataset name: VisionPipe
## More Information
- Library: https://github.com/qubvel/segmentation_models.pytorch
- Docs: https://smp.readthedocs.io/en/latest/
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) |
Ibisbill/stage3_SimpleRL_lr_1e5_epoch2 | Ibisbill | 2025-05-28T14:00:07Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-28T13:28:40Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
NaykinYT/DPO-m1 | NaykinYT | 2025-05-28T14:00:06Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-28T13:58:39Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
nadchan/trocr-encoder-only | nadchan | 2025-05-28T13:58:29Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"base_model:microsoft/trocr-base-stage1",
"base_model:finetune:microsoft/trocr-base-stage1",
"endpoints_compatible",
"region:us"
]
| image-text-to-text | 2025-05-26T18:39:32Z | ---
library_name: transformers
base_model: microsoft/trocr-base-stage1
tags:
- generated_from_trainer
model-index:
- name: trocr-encoder-only
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# trocr-encoder-only
This model is a fine-tuned version of [microsoft/trocr-base-stage1](https://huggingface.co/microsoft/trocr-base-stage1) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
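The `linear` scheduler decays the learning rate from its initial value down to zero over the course of training. A minimal sketch of that decay, assuming no warmup steps (warmup settings are not listed above):

```python
def linear_lr(step: int, total_steps: int, base_lr: float = 5e-05) -> float:
    """Linearly decay base_lr to 0 over total_steps (sketch, no warmup)."""
    return base_lr * max(0.0, 1.0 - step / total_steps)

# Learning rate at the start, midpoint, and end of training
print(linear_lr(0, 1000))     # 5e-05
print(linear_lr(500, 1000))   # 2.5e-05
print(linear_lr(1000, 1000))  # 0.0
</test>
- hexsha
stringlengths 40
40
| size
int64 5
2.06M
| ext
stringclasses 11
values | lang
stringclasses 1
value | max_stars_repo_path
stringlengths 3
251
| max_stars_repo_name
stringlengths 4
130
| max_stars_repo_head_hexsha
stringlengths 40
78
| max_stars_repo_licenses
listlengths 1
10
| max_stars_count
int64 1
191k
⌀ | max_stars_repo_stars_event_min_datetime
stringlengths 24
24
⌀ | max_stars_repo_stars_event_max_datetime
stringlengths 24
24
⌀ | max_issues_repo_path
stringlengths 3
251
| max_issues_repo_name
stringlengths 4
130
| max_issues_repo_head_hexsha
stringlengths 40
78
| max_issues_repo_licenses
listlengths 1
10
| max_issues_count
int64 1
116k
⌀ | max_issues_repo_issues_event_min_datetime
stringlengths 24
24
⌀ | max_issues_repo_issues_event_max_datetime
stringlengths 24
24
⌀ | max_forks_repo_path
stringlengths 3
251
| max_forks_repo_name
stringlengths 4
130
| max_forks_repo_head_hexsha
stringlengths 40
78
| max_forks_repo_licenses
listlengths 1
10
| max_forks_count
int64 1
105k
⌀ | max_forks_repo_forks_event_min_datetime
stringlengths 24
24
⌀ | max_forks_repo_forks_event_max_datetime
stringlengths 24
24
⌀ | content
stringlengths 1
1.05M
| avg_line_length
float64 1
1.02M
| max_line_length
int64 3
1.04M
| alphanum_fraction
float64 0
1
|
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
6b9fcf140a37ec4d0cc44b122a4e5343ef127cca | 4,516 | py | Python | tests/integration/insights/v1/call/test_event.py | BrimmingDev/twilio-python | 4e8154a3bdfe0ecdd6c4f2cdcca3cf6a9b70a1a9 | [
"MIT"
]
| 1 | 2022-02-18T08:53:31.000Z | 2022-02-18T08:53:31.000Z | tests/integration/insights/v1/call/test_event.py | BrimmingDev/twilio-python | 4e8154a3bdfe0ecdd6c4f2cdcca3cf6a9b70a1a9 | [
"MIT"
]
| null | null | null | tests/integration/insights/v1/call/test_event.py | BrimmingDev/twilio-python | 4e8154a3bdfe0ecdd6c4f2cdcca3cf6a9b70a1a9 | [
"MIT"
]
| null | null | null | # coding=utf-8
r"""
This code was generated by
\ / _    _  _|   _  _
 | (_)\/(_)(_|\/| |(/_  v1.0.0
      /       /
"""
from tests import IntegrationTestCase
from tests.holodeck import Request
from twilio.base.exceptions import TwilioException
from twilio.http.response import Response
class EventTestCase(IntegrationTestCase):
def test_list_request(self):
self.holodeck.mock(Response(500, ''))
with self.assertRaises(TwilioException):
self.client.insights.v1.calls("CAXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX") \
.events.list()
self.holodeck.assert_has_request(Request(
'get',
'https://insights.twilio.com/v1/Voice/CAXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX/Events',
))
def test_read_response(self):
self.holodeck.mock(Response(
200,
'''
{
"meta": {
"page": 0,
"page_size": 50,
"first_page_url": "https://insights.twilio.com/v1/Voice/CAaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa/Events?PageSize=50&Page=0",
"previous_page_url": null,
"next_page_url": null,
"key": "events",
"url": "https://insights.twilio.com/v1/Voice/CAaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa/Events?PageSize=50&Page=0"
},
"events": [
{
"timestamp": "2019-09-19T22:15:23Z",
"call_sid": "CA03a02b156c6faa96c86906f7e9ad0f38",
"account_sid": "AC998c10b68cbfda9f67277f7d8f4439c9",
"edge": "sdk_edge",
"group": "connection",
"name": "error",
"level": "ERROR",
"sdk_edge": {
"error": {
"code": 31600
},
"metadata": {
"client_name": "GTI9300323095d271b890c91568931321395",
"location": {
"lat": 37.4192,
"lon": -122.0574
},
"city": "Mountain View",
"country_code": "US",
"country_subdivision": "California",
"ip_address": "108.177.7.83",
"sdk": {
"type": "twilio-voice-android",
"version": "4.5.1",
"platform": "android",
"selected_region": "gll",
"os": {
"name": "android",
"version": "4.3"
},
"device": {
"model": "GT-I9300",
"type": "GT-I9300",
"vendor": "samsung",
"arch": "armeabi-v7a"
}
}
},
"client_edge": null,
"carrier_edge": null,
"sip_edge": null
}
}
]
}
'''
))
actual = self.client.insights.v1.calls("CAXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX") \
.events.list()
self.assertIsNotNone(actual)
def test_read_deep_response(self):
self.holodeck.mock(Response(
200,
'''
{
"meta": {
"page": 10,
"page_size": 5,
"first_page_url": "https://insights.twilio.com/v1/Voice/CAaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa/Events?PageSize=5&Page=0",
"previous_page_url": "https://insights.twilio.com/v1/Voice/CAaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa/Events?PageSize=5&Page=9&PageToken=DP10",
"next_page_url": null,
"key": "events",
"url": "https://insights.twilio.com/v1/Voice/CAaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa/Events?PageSize=5&Page=10"
},
"events": [
{
"timestamp": "2019-09-19T22:15:23Z",
"call_sid": "CA03a02b156c6faa96c86906f7e9ad0f38",
"account_sid": "AC998c10b68cbfda9f67277f7d8f4439c9",
"edge": "sdk_edge",
"group": "connection",
"name": "error",
"level": "ERROR",
"sdk_edge": {
"error": {
"code": 31600
},
"metadata": {
"client_name": "GTI9300323095d271b890c91568931321395",
"location": {
"lat": 37.4192,
"lon": -122.0574
},
"city": "Mountain View",
"country_code": "US",
"country_subdivision": "California",
"ip_address": "108.177.7.83",
"sdk": {
"type": "twilio-voice-android",
"version": "4.5.1",
"platform": "android",
"selected_region": "gll",
"os": {
"name": "android",
"version": "4.3"
},
"device": {
"model": "GT-I9300",
"type": "GT-I9300",
"vendor": "samsung",
"arch": "armeabi-v7a"
}
}
},
"client_edge": null,
"carrier_edge": null,
"sip_edge": null
}
}
]
}
'''
))
actual = self.client.insights.v1.calls("CAXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX") \
.events.list()
self.assertIsNotNone(actual)
| 30.931507 | 143 | 0.535208 | # coding=utf-8
r"""
This code was generated by
\ / _    _  _|   _  _
 | (_)\/(_)(_|\/| |(/_  v1.0.0
      /       /
"""
from tests import IntegrationTestCase
from tests.holodeck import Request
from twilio.base.exceptions import TwilioException
from twilio.http.response import Response
class EventTestCase(IntegrationTestCase):
def test_list_request(self):
self.holodeck.mock(Response(500, ''))
with self.assertRaises(TwilioException):
self.client.insights.v1.calls("CAXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX") \
.events.list()
self.holodeck.assert_has_request(Request(
'get',
'https://insights.twilio.com/v1/Voice/CAXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX/Events',
))
def test_read_response(self):
self.holodeck.mock(Response(
200,
'''
{
"meta": {
"page": 0,
"page_size": 50,
"first_page_url": "https://insights.twilio.com/v1/Voice/CAaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa/Events?PageSize=50&Page=0",
"previous_page_url": null,
"next_page_url": null,
"key": "events",
"url": "https://insights.twilio.com/v1/Voice/CAaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa/Events?PageSize=50&Page=0"
},
"events": [
{
"timestamp": "2019-09-19T22:15:23Z",
"call_sid": "CA03a02b156c6faa96c86906f7e9ad0f38",
"account_sid": "AC998c10b68cbfda9f67277f7d8f4439c9",
"edge": "sdk_edge",
"group": "connection",
"name": "error",
"level": "ERROR",
"sdk_edge": {
"error": {
"code": 31600
},
"metadata": {
"client_name": "GTI9300323095d271b890c91568931321395",
"location": {
"lat": 37.4192,
"lon": -122.0574
},
"city": "Mountain View",
"country_code": "US",
"country_subdivision": "California",
"ip_address": "108.177.7.83",
"sdk": {
"type": "twilio-voice-android",
"version": "4.5.1",
"platform": "android",
"selected_region": "gll",
"os": {
"name": "android",
"version": "4.3"
},
"device": {
"model": "GT-I9300",
"type": "GT-I9300",
"vendor": "samsung",
"arch": "armeabi-v7a"
}
}
},
"client_edge": null,
"carrier_edge": null,
"sip_edge": null
}
}
]
}
'''
))
actual = self.client.insights.v1.calls("CAXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX") \
.events.list()
self.assertIsNotNone(actual)
def test_read_deep_response(self):
self.holodeck.mock(Response(
200,
'''
{
"meta": {
"page": 10,
"page_size": 5,
"first_page_url": "https://insights.twilio.com/v1/Voice/CAaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa/Events?PageSize=5&Page=0",
"previous_page_url": "https://insights.twilio.com/v1/Voice/CAaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa/Events?PageSize=5&Page=9&PageToken=DP10",
"next_page_url": null,
"key": "events",
"url": "https://insights.twilio.com/v1/Voice/CAaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa/Events?PageSize=5&Page=10"
},
"events": [
{
"timestamp": "2019-09-19T22:15:23Z",
"call_sid": "CA03a02b156c6faa96c86906f7e9ad0f38",
"account_sid": "AC998c10b68cbfda9f67277f7d8f4439c9",
"edge": "sdk_edge",
"group": "connection",
"name": "error",
"level": "ERROR",
"sdk_edge": {
"error": {
"code": 31600
},
"metadata": {
"client_name": "GTI9300323095d271b890c91568931321395",
"location": {
"lat": 37.4192,
"lon": -122.0574
},
"city": "Mountain View",
"country_code": "US",
"country_subdivision": "California",
"ip_address": "108.177.7.83",
"sdk": {
"type": "twilio-voice-android",
"version": "4.5.1",
"platform": "android",
"selected_region": "gll",
"os": {
"name": "android",
"version": "4.3"
},
"device": {
"model": "GT-I9300",
"type": "GT-I9300",
"vendor": "samsung",
"arch": "armeabi-v7a"
}
}
},
"client_edge": null,
"carrier_edge": null,
"sip_edge": null
}
}
]
}
'''
))
actual = self.client.insights.v1.calls("CAXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX") \
.events.list()
self.assertIsNotNone(actual)
| 0 | 0 | 0 |
f684e91bba082f92719d294a65174dc624e97e85 | 474 | py | Python | pyecharts/custom/grid.py | zilong305/pycharts | e71a33b9b816f9e541a57f8c087b2f3b91b9a5a5 | [
"MIT"
]
| null | null | null | pyecharts/custom/grid.py | zilong305/pycharts | e71a33b9b816f9e541a57f8c087b2f3b91b9a5a5 | [
"MIT"
]
| null | null | null | pyecharts/custom/grid.py | zilong305/pycharts | e71a33b9b816f9e541a57f8c087b2f3b91b9a5a5 | [
"MIT"
]
| null | null | null | #!/usr/bin/env python
# coding=utf-8
| 17.555556 | 45 | 0.459916 | #!/usr/bin/env python
# coding=utf-8
class Grid(object):
def __init__(self):
self._chart = None
self._js_dependencies = set()
def add(self, grid):
pass
def render(self):
pass
def render_embed(self):
pass
def show_config(self):
pass
@property
def chart(self):
return self._chart
def _repr_html_(self):
pass
def _check_js_dependencies(self):
pass
| 153 | 247 | 23 |
671e716a57f2783e29c7bdf05423a77c6a854a84 | 1,837 | py | Python | test/integration/ggrc/services/test_query_snapshots.py | acidburn0zzz/ggrc-core | 0f0b8d018bc8c25c7b0016cb2d8a7ac1e446bc59 | [
"ECL-2.0",
"Apache-2.0"
]
| 1 | 2019-01-17T05:39:29.000Z | 2019-01-17T05:39:29.000Z | test/integration/ggrc/services/test_query_snapshots.py | acidburn0zzz/ggrc-core | 0f0b8d018bc8c25c7b0016cb2d8a7ac1e446bc59 | [
"ECL-2.0",
"Apache-2.0"
]
| null | null | null | test/integration/ggrc/services/test_query_snapshots.py | acidburn0zzz/ggrc-core | 0f0b8d018bc8c25c7b0016cb2d8a7ac1e446bc59 | [
"ECL-2.0",
"Apache-2.0"
]
| 1 | 2019-01-17T05:39:35.000Z | 2019-01-17T05:39:35.000Z | # coding: utf-8
# Copyright (C) 2017 Google Inc.
# Licensed under http://www.apache.org/licenses/LICENSE-2.0 <see LICENSE file>
"""Tests for /query api endpoint."""
from datetime import datetime
from operator import itemgetter
from flask import json
from nose.plugins.skip import SkipTest
from ggrc import db
from ggrc.models import CustomAttributeDefinition as CAD
from integration.ggrc.converters import TestCase
from integration.ggrc.models import factories
| 31.135593 | 78 | 0.681546 | # coding: utf-8
# Copyright (C) 2017 Google Inc.
# Licensed under http://www.apache.org/licenses/LICENSE-2.0 <see LICENSE file>
"""Tests for /query api endpoint."""
from datetime import datetime
from operator import itemgetter
from flask import json
from nose.plugins.skip import SkipTest
from ggrc import db
from ggrc.models import CustomAttributeDefinition as CAD
from integration.ggrc.converters import TestCase
from integration.ggrc.models import factories
class BaseQueryAPITestCase(TestCase):
"""Base class for /query api tests with utility methods."""
def setUp(self):
"""Log in before performing queries."""
super(BaseQueryAPITestCase, self).setUp()
self.client.get("/login")
def _post(self, data):
return self.client.post(
"/query",
data=json.dumps(data),
headers={"Content-Type": "application/json", }
)
def _get_first_result_set(self, data, *keys):
"""Post data, get response, get values from it like in obj["a"]["b"]."""
response = self._post(data)
self.assert200(response)
result = json.loads(response.data)[0]
for key in keys:
result = result.get(key)
self.assertIsNot(result, None)
return result
@staticmethod
def _make_query_dict(object_name, type_=None, expression=None, limit=None,
order_by=None):
"""Make a dict with query for object_name with optional parameters."""
query = {
"object_name": object_name,
"filters": {
"expression": {},
},
}
if type_:
query["type"] = type_
if expression:
query["filters"]["expression"] = expression
if limit:
query["limit"] = limit
if order_by:
query["order_by"] = order_by
return query
def _setup_objects():
audit = factories.Audit()
factories.Market()
| 1,254 | 76 | 49 |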
### Framework versions
- Transformers 4.52.2
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|