modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
---|---|---|---|---|---|---|---|---|---|
qingyanjiu/qwen3-14b-qrt-epoch3 | qingyanjiu | 2025-05-22T04:56:06Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-22T04:48:19Z | ---
base_model: input0/Qwen3-14B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** qingyanjiu
- **License:** apache-2.0
- **Finetuned from model:** input0/Qwen3-14B
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
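A minimal loading sketch with 🤗 Transformers (an illustrative sketch only, assuming the repository hosts merged full weights rather than a standalone LoRA adapter):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "qingyanjiu/qwen3-14b-qrt-epoch3"
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype="auto", device_map="auto")

# Generate a short completion to sanity-check the checkpoint
inputs = tok("Hello, how are you?", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tok.decode(out[0], skip_special_tokens=True))
```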
|
JaehyeokLee/qwen3-8b-lora-summarization | JaehyeokLee | 2025-05-22T04:39:10Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen3-8B",
"base_model:adapter:Qwen/Qwen3-8B",
"region:us"
]
| null | 2025-05-22T04:39:05Z | ---
base_model: Qwen/Qwen3-8B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2 |
the-acorn-ai/Qwen3-4B-Base-4K-KuhnPoker-Random-Role-0522-Zichen-step_00384 | the-acorn-ai | 2025-05-22T04:34:17Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-22T04:31:07Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
wzhgba/opendwm-models | wzhgba | 2025-05-22T04:15:09Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2025-05-22T04:15:09Z | ---
license: apache-2.0
---
|
PaceKW/bert-base-indonesian-1.5G-multilabel-indonesian-hate-speech-new | PaceKW | 2025-05-22T03:33:40Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:cahya/bert-base-indonesian-1.5G",
"base_model:finetune:cahya/bert-base-indonesian-1.5G",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2025-05-22T03:31:12Z | ---
library_name: transformers
license: mit
base_model: cahya/bert-base-indonesian-1.5G
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: bert-base-indonesian-1.5G-multilabel-indonesian-hate-speech-new
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-indonesian-1.5G-multilabel-indonesian-hate-speech-new
This model is a fine-tuned version of [cahya/bert-base-indonesian-1.5G](https://huggingface.co/cahya/bert-base-indonesian-1.5G) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3641
- F1: 0.7802
- Roc Auc: 0.8639
- Accuracy: 0.7156
## Model description
More information needed
## Intended uses & limitations
More information needed
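A minimal multilabel inference sketch (hedged: the input sentence and the 0.5 decision threshold below are illustrative assumptions, not taken from this card):
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo = "PaceKW/bert-base-indonesian-1.5G-multilabel-indonesian-hate-speech-new"
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

inputs = tok("contoh kalimat", return_tensors="pt")
with torch.no_grad():
    probs = torch.sigmoid(model(**inputs).logits)[0]  # multilabel: sigmoid per label
# Keep every label whose probability clears the (assumed) 0.5 threshold
print([model.config.id2label[i] for i, p in enumerate(probs) if p > 0.5])
```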
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| 0.3106 | 1.0 | 659 | 0.2504 | 0.6779 | 0.7832 | 0.5978 |
| 0.2235 | 2.0 | 1318 | 0.2113 | 0.7466 | 0.8392 | 0.6441 |
| 0.1722 | 3.0 | 1977 | 0.2283 | 0.7511 | 0.8493 | 0.6581 |
| 0.097 | 4.0 | 2636 | 0.2421 | 0.7626 | 0.8490 | 0.6874 |
| 0.0643 | 5.0 | 3295 | 0.2727 | 0.7584 | 0.8417 | 0.6938 |
| 0.0572 | 6.0 | 3954 | 0.2817 | 0.7662 | 0.8662 | 0.6737 |
| 0.0304 | 7.0 | 4613 | 0.3075 | 0.7606 | 0.8475 | 0.6879 |
| 0.021 | 8.0 | 5272 | 0.3195 | 0.7697 | 0.8626 | 0.6932 |
| 0.0157 | 9.0 | 5931 | 0.3347 | 0.7663 | 0.8477 | 0.7052 |
| 0.0095 | 10.0 | 6590 | 0.3353 | 0.7759 | 0.8598 | 0.7118 |
| 0.0086 | 11.0 | 7249 | 0.3467 | 0.7768 | 0.8590 | 0.7136 |
| 0.0063 | 12.0 | 7908 | 0.3503 | 0.7795 | 0.8644 | 0.7128 |
| 0.0046 | 13.0 | 8567 | 0.3577 | 0.7797 | 0.8613 | 0.7153 |
| 0.0037 | 14.0 | 9226 | 0.3622 | 0.7801 | 0.8674 | 0.7115 |
| 0.0046 | 15.0 | 9885 | 0.3641 | 0.7802 | 0.8639 | 0.7156 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.7.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1
|
DanielNRU/pollen-ner2-550 | DanielNRU | 2025-05-22T03:31:09Z | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:DeepPavlov/bert-base-bg-cs-pl-ru-cased",
"base_model:adapter:DeepPavlov/bert-base-bg-cs-pl-ru-cased",
"region:us"
]
| null | 2025-05-22T03:25:58Z | ---
library_name: peft
base_model: DeepPavlov/bert-base-bg-cs-pl-ru-cased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
model-index:
- name: pollen-ner2-550
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pollen-ner2-550
This model is a fine-tuned version of [DeepPavlov/bert-base-bg-cs-pl-ru-cased](https://huggingface.co/DeepPavlov/bert-base-bg-cs-pl-ru-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3432
- Precision: 0.6156
- Recall: 0.7269
- F1: 0.6667
## Model description
More information needed
## Intended uses & limitations
More information needed
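A minimal sketch for loading the adapter with PEFT (an assumption-laden sketch: the NER label count is not documented in this card, so `num_labels` is left at the default):
```python
from peft import PeftModel
from transformers import AutoModelForTokenClassification, AutoTokenizer

base = "DeepPavlov/bert-base-bg-cs-pl-ru-cased"
tok = AutoTokenizer.from_pretrained(base)
base_model = AutoModelForTokenClassification.from_pretrained(base)  # num_labels assumed
model = PeftModel.from_pretrained(base_model, "DanielNRU/pollen-ner2-550")
model.eval()
```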
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|
| No log | 1.0 | 69 | 0.3899 | 0.5340 | 0.6948 | 0.6038 |
| No log | 2.0 | 138 | 0.3667 | 0.5738 | 0.6948 | 0.6285 |
| No log | 3.0 | 207 | 0.3638 | 0.5784 | 0.7108 | 0.6378 |
| No log | 4.0 | 276 | 0.3495 | 0.6007 | 0.7068 | 0.6494 |
| No log | 5.0 | 345 | 0.3547 | 0.5805 | 0.7169 | 0.6415 |
| No log | 6.0 | 414 | 0.3432 | 0.6156 | 0.7269 | 0.6667 |
| No log | 7.0 | 483 | 0.3453 | 0.6026 | 0.7369 | 0.6631 |
| 0.7026 | 8.0 | 552 | 0.3397 | 0.6142 | 0.7289 | 0.6667 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.7.0+cu128
- Datasets 3.5.0
- Tokenizers 0.21.1 |
DLYS/Qwen2.5-14b-MEDITUNE | DLYS | 2025-05-22T00:24:19Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-11-29T06:07:42Z | ---
library_name: transformers
tags: []
---
# Qwen2.5-14b-MEDITUNE
This model is a fine-tuned version of Qwen2.5-14b-instruct on Henrychur/MMedBench and a Korean Medical QA dataset. The Korean Medical QA dataset is available [here](https://www.aihub.or.kr/aihubdata/data/list.do?currMenu=115&topMenu=100&&srchDataRealmCode=REALM006).
# Test
We tested this model on sean0042/KorMedMCQA.
# Result

* "Rationale" refers to the reasoning process generated by Qwen-2.5-72B; it marks the cases where this rationale was used for training.
* The 5-response ensemble determines the final answer by majority vote: five responses are generated and the answer chosen most often is selected (a minimal sketch follows below).
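A minimal sketch of the voting scheme described above (the `generate` callable and the answer format are placeholders, not the exact evaluation code):
```python
from collections import Counter

def ensemble_answer(generate, question, n=5):
    """Sample n responses and return the answer chosen most often."""
    answers = [generate(question) for _ in range(n)]  # each call returns e.g. "A".."E"
    return Counter(answers).most_common(1)[0][0]
```
|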
ntnu-smil/whisper-large-v3-turbo-sandi-train-1-rich-transcript-32-merged | ntnu-smil | 2025-05-21T23:40:21Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"wft",
"audio",
"speech",
"generated_from_trainer",
"en",
"dataset:ntnu-smil/sandi2025-ds",
"base_model:openai/whisper-large-v3-turbo",
"base_model:finetune:openai/whisper-large-v3-turbo",
"license:mit",
"model-index",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2025-05-21T23:39:36Z | ---
library_name: transformers
language:
- en
license: mit
base_model: openai/whisper-large-v3-turbo
tags:
- wft
- whisper
- automatic-speech-recognition
- audio
- speech
- generated_from_trainer
datasets:
- ntnu-smil/sandi2025-ds
metrics:
- wer
model-index:
- name: whisper-large-v3-turbo-sandi-train-1-rich-transcript-32
results:
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: ntnu-smil/sandi2025-ds
type: ntnu-smil/sandi2025-ds
metrics:
- type: wer
value: 23.248425746388847
name: Wer
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-large-v3-turbo-sandi-train-1-rich-transcript-32
This model is a fine-tuned version of [openai/whisper-large-v3-turbo](https://huggingface.co/openai/whisper-large-v3-turbo) on the ntnu-smil/sandi2025-ds dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7920
- Wer: 23.2484
- Cer: 16.5856
- Decode Runtime: 203.4514
- Wer Runtime: 0.1639
- Cer Runtime: 0.3157
## Model description
More information needed
## Intended uses & limitations
More information needed
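A minimal transcription sketch with the 🤗 Transformers pipeline (a sketch only; `audio.wav` is a placeholder file and the merged checkpoint is assumed to load directly):
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="ntnu-smil/whisper-large-v3-turbo-sandi-train-1-rich-transcript-32-merged",
)
print(asr("audio.wav")["text"])  # transcribe a local audio file
```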
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.98) and epsilon=1e-06 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- training_steps: 732
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer | Decode Runtime | Wer Runtime | Cer Runtime |
|:-------------:|:------:|:----:|:---------------:|:--------:|:-------:|:--------------:|:-----------:|:-----------:|
| 0.9653 | 0.1667 | 122 | 0.8334 | 103.3411 | 64.3972 | 245.8729 | 0.1967 | 0.3575 |
| 1.1313 | 1.1667 | 244 | 0.8073 | 53.0086 | 33.9467 | 210.6469 | 0.1851 | 0.3293 |
| 0.54 | 2.1667 | 366 | 0.7915 | 25.4142 | 18.3008 | 196.4910 | 0.1906 | 0.3139 |
| 0.3761 | 3.1667 | 488 | 0.7882 | 24.2463 | 17.3425 | 196.9004 | 0.1675 | 0.3169 |
| 0.8462 | 4.1667 | 610 | 0.7921 | 23.4051 | 16.7178 | 197.5723 | 0.1661 | 0.3141 |
| 0.9957 | 5.1667 | 732 | 0.7920 | 23.2484 | 16.5856 | 203.4514 | 0.1639 | 0.3157 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.52.2
- Pytorch 2.8.0.dev20250319+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1 |
Wsassi/whisper-large-v3-scc22 | Wsassi | 2025-05-21T22:24:56Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2025-05-21T22:14:38Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
pictgencustomer/danceparade_107 | pictgencustomer | 2025-05-21T22:21:24Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
]
| text-to-image | 2025-05-21T22:21:13Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: danceparade_5
---
# Danceparade_107
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `danceparade_5` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('pictgencustomer/danceparade_107', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
anonymousneurips008/empiar10166-ddpm-ema-cryoem-128x128 | anonymousneurips008 | 2025-05-21T22:19:32Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
]
| null | 2025-05-20T19:26:59Z | ---
license: mit
library_name: diffusers
---
DDPM trained on the EMPIAR-10166 training dataset of 190,904 images of size 128x128.
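A minimal sampling sketch with 🧨 diffusers (a sketch under the assumption that the repo's `DDPMPipeline` metadata loads as-is):
```python
from diffusers import DDPMPipeline

pipe = DDPMPipeline.from_pretrained("anonymousneurips008/empiar10166-ddpm-ema-cryoem-128x128")
image = pipe().images[0]  # one 128x128 sample with the default 1000 denoising steps
image.save("sample.png")
```
|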
Flaviomm01/Lagoon01 | Flaviomm01 | 2025-05-21T22:01:01Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2025-05-21T22:01:01Z | ---
license: apache-2.0
---
|
bobby97/flux-fill-stain-5-lora | bobby97 | 2025-05-21T21:49:16Z | 2 | 0 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"flux",
"flux-diffusers",
"template:sd-lora",
"base_model:black-forest-labs/FLUX.1-Fill-dev",
"base_model:adapter:black-forest-labs/FLUX.1-Fill-dev",
"license:other",
"region:us"
]
| text-to-image | 2025-05-20T08:01:38Z | ---
base_model: black-forest-labs/FLUX.1-Fill-dev
library_name: diffusers
license: other
instance_prompt: A TOK dark-mark
widget:
- text: A TOK dark-mark
output:
url: image_0.png
- text: A TOK dark-mark
output:
url: image_1.png
- text: A TOK dark-mark
output:
url: image_2.png
- text: A TOK dark-mark
output:
url: image_3.png
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- flux
- flux-diffusers
- template:sd-lora
- text-to-image
- diffusers-training
- diffusers
- lora
- flux
- flux-diffusers
- template:sd-lora
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# Flux-Fill DreamBooth LoRA - bobby97/flux-fill-stain-5-lora
<Gallery />
## Model description
These are bobby97/flux-fill-stain-5-lora DreamBooth LoRA weights for black-forest-labs/FLUX.1-Fill-dev.
The weights were trained using [DreamBooth](https://dreambooth.github.io/) with a custom [Flux diffusers trainer](https://github.com/Sebastian-Zok/FLUX-Fill-LoRa-Training).
Was LoRA for the text encoder enabled? False.
## Trigger words
You should use `A TOK dark-mark` to trigger the image generation.
## Download model
[Download the *.safetensors LoRA](https://huggingface.co/bobby97/flux-fill-stain-5-lora/tree/main) in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16).to('cuda')
pipeline.load_lora_weights('bobby97/flux-fill-stain-5-lora', weight_name='pytorch_lora_weights.safetensors')
image = pipeline('A TOK dark-mark').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## License
Please adhere to the licensing terms as described [here](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
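Until the snippet above is filled in, here is a minimal inpainting sketch (hedged assumptions: `FluxFillPipeline` is the natural pipeline for the FLUX.1-Fill base model, and `image.png`/`mask.png` are placeholder files):
```python
import torch
from diffusers import FluxFillPipeline
from diffusers.utils import load_image

pipe = FluxFillPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Fill-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("bobby97/flux-fill-stain-5-lora", weight_name="pytorch_lora_weights.safetensors")

image = load_image("image.png")  # source image (placeholder path)
mask = load_image("mask.png")    # white pixels mark the region to repaint
result = pipe(prompt="A TOK dark-mark", image=image, mask_image=mask).images[0]
result.save("out.png")
```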
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
leeccNLPLAB/unsloth_Qwen3-4B-unsloth-bnb-4bit-BookSQL | leeccNLPLAB | 2025-05-21T21:36:43Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/Qwen3-4B-unsloth-bnb-4bit",
"base_model:finetune:unsloth/Qwen3-4B-unsloth-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-21T21:32:07Z | ---
base_model: unsloth/Qwen3-4B-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** leeccNLPLAB
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen3-4B-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
limingcv/KuaiShou_MPS | limingcv | 2025-05-21T20:42:06Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2025-05-21T20:38:18Z | ---
license: apache-2.0
---
This model is the MPS model from https://github.com/Kwai-Kolors/MPS?tab=readme-ov-file |
morturr/Mistral-7B-v0.1-amazon-seed-42-2025-05-21 | morturr | 2025-05-21T17:49:15Z | 0 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
]
| null | 2025-05-21T09:54:07Z | ---
library_name: peft
license: apache-2.0
base_model: mistralai/Mistral-7B-v0.1
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Mistral-7B-v0.1-amazon-seed-42-2025-05-21
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Mistral-7B-v0.1-amazon-seed-42-2025-05-21
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1 |
gavrilstep/95409e72-553b-407c-8724-3b48ac7fb3b9 | gavrilstep | 2025-05-21T16:52:38Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2-1.5B",
"base_model:adapter:unsloth/Qwen2-1.5B",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
]
| null | 2025-05-21T16:41:06Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2-1.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 95409e72-553b-407c-8724-3b48ac7fb3b9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: unsloth/Qwen2-1.5B
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 8238689af7edb3c9_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/8238689af7edb3c9_train_data.json
type:
field_instruction: system
field_output: prompt
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.55
group_by_length: false
hub_model_id: gavrilstep/95409e72-553b-407c-8724-3b48ac7fb3b9
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 1.0e-06
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 96
lora_dropout: 0.01
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 48
lora_target_linear: true
lr_scheduler: cosine
max_steps: 150
micro_batch_size: 4
mixed_precision: bf16
mlflow_experiment_name: /tmp/8238689af7edb3c9_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 2048
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: dd656613-2166-41f4-8840-76ceb5e9b641
wandb_project: s56-7
wandb_run: your_name
wandb_runid: dd656613-2166-41f4-8840-76ceb5e9b641
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 95409e72-553b-407c-8724-3b48ac7fb3b9
This model is a fine-tuned version of [unsloth/Qwen2-1.5B](https://huggingface.co/unsloth/Qwen2-1.5B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8758
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 150
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.2883 | 0.0098 | 150 | 1.8758 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
ilybawkugo/lora_qwen_2e-4-1616-1024 | ilybawkugo | 2025-05-21T16:48:12Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-21T15:46:58Z | ---
base_model: unsloth/qwen2.5-7b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** ilybawkugo
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2.5-7b-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
DanielNRU/pollen-ner-1600 | DanielNRU | 2025-05-21T16:28:19Z | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:DeepPavlov/rubert-base-cased",
"base_model:adapter:DeepPavlov/rubert-base-cased",
"region:us"
]
| null | 2025-05-20T16:03:39Z | ---
library_name: peft
base_model: DeepPavlov/rubert-base-cased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
model-index:
- name: pollen-ner-1600
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pollen-ner-1600
This model is a fine-tuned version of [DeepPavlov/rubert-base-cased](https://huggingface.co/DeepPavlov/rubert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1443
- Precision: 0.8593
- Recall: 0.9076
- F1: 0.8828
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|
| No log | 1.0 | 200 | 0.1436 | 0.8462 | 0.9056 | 0.8749 |
| No log | 2.0 | 400 | 0.1407 | 0.8550 | 0.8996 | 0.8767 |
| 0.2058 | 3.0 | 600 | 0.1443 | 0.8593 | 0.9076 | 0.8828 |
| 0.2058 | 4.0 | 800 | 0.1405 | 0.8555 | 0.9036 | 0.8789 |
| 0.1935 | 5.0 | 1000 | 0.1432 | 0.8593 | 0.9076 | 0.8828 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.7.0+cu128
- Datasets 3.5.0
- Tokenizers 0.21.1 |
batmangiaicuuthegioi/bi-encoders-embeddings | batmangiaicuuthegioi | 2025-05-21T16:16:44Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"xlm-roberta",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:37059",
"loss:MultipleNegativesRankingLoss",
"dataset:batmangiaicuuthegioi/zalo-legal-triplets",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:AITeamVN/Vietnamese_Embedding",
"base_model:finetune:AITeamVN/Vietnamese_Embedding",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2025-05-21T16:15:28Z | ---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:37059
- loss:MultipleNegativesRankingLoss
base_model: AITeamVN/Vietnamese_Embedding
widget:
- source_sentence: Quแบฃn lรฝ vร sแปญ dแปฅng phรญ bแบฃo vแป mรดi trฦฐแปng ฤแปi vแปi nฦฐแปc thแบฃi cรดng
nghiแปp ฤฦฐแปฃc quy ฤแปnh ra sao?
sentences:
- 'ฤiแปu 16. Trรกch nhiแปm cแปงa Uแปท ban nhรขn dรขn cแบฅp huyแปn, cแบฅp xรฃ nฦกi cรณ ฤรช. ฤiแปm c)
trang bแป vร hฦฐแปng dแบซn viแปc quแบฃn lรฝ sแปญ dแปฅng cรกc dแปฅng cแปฅ, sแป sรกch cho cรกc ฤแปi tuแบงn
tra, canh gรกc ฤรช theo quy ฤแปnh tแบกi khoแบฃn 2 ฤiแปu 6 cแปงa thรดng tฦฐ nร y. '
- ฤiแปu 33. Quแบฃn lรฝ tร i khoแบฃn, tร i sแบฃn kรฝ quแปน cแปงa thร nh viรชn bรน trแปซ. khoแบฃn 6. loแบกi
kรฝ quแปน, phฦฐฦกng phรกp xรกc ฤแปnh mแปฉc kรฝ quแปน, phฦฐฦกng thแปฉc kรฝ quแปน, thแปi hแบกn kรฝ quแปน,
bแป sung kรฝ quแปน, chuyแปn giao tร i sแบฃn kรฝ quแปน, phฦฐฦกng thแปฉc ฤแปnh giรก tร i sแบฃn kรฝ quแปน,
xรกc ฤแปnh lรฃi lแป vแป thแบฟ, hoแบกt ฤแปng quแบฃn lรฝ tร i khoแบฃn vร tร i sแบฃn kรฝ quแปน cแปงa thร nh
viรชn bรน trแปซ thแปฑc hiแปn theo quy ฤแปnh cแปงa bแป trฦฐแปng bแป tร i chรญnh vร quy chแบฟ cแปงa
tแปng cรดng ty lฦฐu kรฝ vร bรน trแปซ chแปฉng khoรกn viแปt nam.
- ฤiแปu 4. Nguyรชn tแบฏc quแบฃn lรฝ vร sแปญ dแปฅng phรญ. khoแบฃn 3. phรญ thu tแปซ cรกc hoแบกt ฤแปng
dแปch vแปฅ do tแป chแปฉc ฤฦฐแปฃc cฦก quan nhร nฦฐแปc cรณ thแบฉm quyแปn giao thแปฑc hiแปn ฤฦฐแปฃc ฤแป
lแบกi mแปt phแบงn hoแบทc toร n bแป sแป tiแปn phรญ thu ฤฦฐแปฃc ฤแป trang trแบฃi chi phรญ hoแบกt ฤแปng
cung cแบฅp dแปch vแปฅ, thu phรญ ฤฦฐแปฃc xรกc ฤแปnh theo quy ฤแปnh tแบกi ฤiแปu 5 nghแป ฤแปnh nร y;
phแบงn cรฒn lแบกi (nแบฟu cรณ) nแปp ngรขn sรกch nhร nฦฐแปc, trแปซ trฦฐแปng hแปฃp chรญnh phแปง cรณ quy
ฤแปnh khรกc thรฌ thแปฑc hiแปn theo quy ฤแปnh cแปงa chรญnh phแปง. sแป tiแปn phรญ ฤฦฐแปฃc ฤแป lแบกi lร
doanh thu cแปงa tแป chแปฉc thu phรญ.
- source_sentence: Ngร y bแบงu cแปญ ฤแบกi biแปu Quแปc Hแปi cรณ phแบฃi lร ngร y chแปง nhแบญt?
sentences:
- 'ฤiแปu 16. Cแปญ quแปc thiแปu nฦฐแปc Cแปng hรฒa xรฃ hแปi chแปง nghฤฉa Viแปt Nam. khoแบฃn 1. quแปc
thiแปu viแปt nam ฤฦฐแปฃc cแปญ trong cรกc cuแปc mรญt tinh, chiรชu ฤรฃi chร o mแปซng quแปc khรกnh,
ngร y lแป lแปn cแปงa viแปt nam hoแบทc kแปท niแปm sแปฑ kiแปn quan trแปng trong quan hแป giแปฏa viแปt
nam vแปi quแปc gia hay tแป chแปฉc quแปc tแบฟ tiแบฟp nhแบญn phรน hแปฃp vแปi quy ฤแปnh, thรดng lแป lแป tรขn cแปงa quแปc gia, tแป chแปฉc quแปc tแบฟ tiแบฟp nhแบญn. '
- 'ฤiแปu 4. Giแบฃi thรญch tแปซ ngแปฏ. khoแบฃn 36. quแบฃn lรฝ quแปน ฤแบงu tฦฐ chแปฉng khoรกn lร hoแบกt
ฤแปng quแบฃn lรฝ trong viแปc mua, bรกn, nแบฏm giแปฏ chแปฉng khoรกn vร cรกc tร i sแบฃn khรกc cแปงa
quแปน ฤแบงu tฦฐ chแปฉng khoรกn. '
- 'ฤiแปu 52. Giแปi thiแปu ngฦฐแปi cแปงa cฦก quan, tแป chแปฉc, ฤฦกn vแป แปฉng cแปญ ฤแบกi biแปu Hแปi ฤแปng
nhรขn dรขn. khoแบฃn 4. ban cรดng tรกc mแบทt trแบญn แป thรดn, tแป dรขn phแป dแปฑ kiแบฟn ngฦฐแปi cแปงa
thรดn, tแป dรขn phแป ฤแป giแปi thiแปu แปฉng cแปญ ฤแบกi biแปu hแปi ฤแปng nhรขn dรขn cแบฅp xรฃ vร phแปi
hแปฃp vแปi trฦฐแปng thรดn, tแป trฦฐแปng tแป dรขn phแป tแป chแปฉc hแปi nghแป cแปญ tri ฤแป thแบฃo luแบญn,
giแปi thiแปu ngฦฐแปi แปฉng cแปญ ฤแบกi biแปu hแปi ฤแปng nhรขn dรขn cแบฅp xรฃ. viแปc giแปi thiแปu ngฦฐแปi
แปฉng cแปญ ฤแบกi biแปu hแปi ฤแปng nhรขn dรขn cแบฅp xรฃ แป thรดn, tแป dรขn phแป do แปงy ban thฦฐแปng vแปฅ
quแปc hแปi hฦฐแปng dแบซn; '
- source_sentence: Nghiรชn cแปฉu y sinh hแปc ฤa trung tรขm lร gรฌ?
sentences:
- 'ฤiแปu 64. Vi phแบกm quy ฤแปnh vแป cung cแบฅp, sแปญ dแปฅng thiแบฟt bแป vรด tuyแบฟn ฤiแปn ฤฦฐแปฃc miแปn Giแบฅy phรฉp sแปญ dแปฅng tแบงn sแป vรด tuyแบฟn ฤiแปn. khoแบฃn 2. phแบกt tiแปn tแปซ < mแปฉc phแบกt tiแปn
> ฤแบฟn < mแปฉc phแบกt tiแปn > ฤแปi vแปi hร nh vi sแบฃn xuแบฅt hoแบทc nhแบญp khแบฉu thiแบฟt bแป vรด tuyแบฟn
ฤiแปn thuแปc danh mแปฅc thiแบฟt bแป vรด tuyแบฟn ฤiแปn ฤฦฐแปฃc miแปn giแบฅy phรฉp sแปญ dแปฅng tแบงn sแป
vรด tuyแบฟn ฤiแปn nhฦฐng khรดng thแปฑc hiแปn chแปฉng nhแบญn vร cรดng bแป hแปฃp quy trฦฐแปc khi ฤฦฐa
vร o lฦฐu thรดng trรชn thแป trฦฐแปng. '
- 'ฤiแปu 3. Giแบฃi thรญch tแปซ ngแปฏ. khoแบฃn 19. nguy cฦก (risk) lร xรกc suแบฅt mร mแปt sแปฑ kiแปn
hoแบทc kแบฟt quแบฃ thuแบญn lแปฃi hay bแบฅt lแปฃi xแบฃy ra trong mแปt khoแบฃng thแปi gian xรกc ฤแปnh
cแปงa nghiรชn cแปฉu theo tiแบฟp cแบญn cแปงa dแปch tแป. '
- 'ฤiแปu 9. Nแปi dung tuแบงn tra, canh gรกc ฤรช. ฤiแปm d) mแปi kรญp tuแบงn tra phแบฃi kiแปm tra
vฦฐแปฃt quรก phแบกm vi phแปฅ trรกch vแป hai phรญa, mแปi phรญa 50m. ฤแปi vแปi nhแปฏng khu vแปฑc ฤรฃ
tแปซng xแบฃy ra sแปฑ cแป hฦฐ hแปng, phแบฃi kiแปm tra quan sรกt rแปng hฦกn ฤแป phรกt hiแปn sแปฑ cแป. '
- source_sentence: Khรดng treo biแปn thรดng bรกo khรดng bรกn thuแปc lรก cho ngฦฐแปi dฦฐแปi 18
tuแปi phแบกt 1 triแปu ฤฦฐแปฃc quy ฤแปnh nhฦฐ thแบฟ nร o?
sentences:
- 'ฤiแปu 49. Hร nh vi vi phแบกm vแป ฤฤng kรฝ hแปฃp ฤแปng theo mแบซu, ฤiแปu kiแปn giao dแปch chung. ฤiแปm
c) khรดng รกp dแปฅng ฤรบng hแปฃp ฤแปng theo mแบซu, ฤiแปu kiแปn giao dแปch chung ฤรฃ ฤฤng kรฝ
vแปi cฦก quan quแบฃn lรฝ nhร nฦฐแปc cรณ thแบฉm quyแปn vแป bแบฃo vแป quyแปn lแปฃi ngฦฐแปi tiรชu dรนng
theo quy ฤแปnh. '
- ฤiแปu 15. Khen thฦฐแปng, kแปท Luแบญt. khoแบฃn 2. nhแปฏng ฤฦกn vแป vร cรก nhรขn vi phแบกm quy ฤแปnh
tแบกi thรดng tฦฐ nร y tuแปณ theo lแปi nแบทng nhแบน sแบฝ bแป thi hร nh kแปท luแบญt tแปซ cแบฃnh cรกo ฤแบฟn
truy tแป trฦฐแปc phรกp luแบญt cแปงa nhร nฦฐแปc.
- 'ฤiแปu 81. Tฦฐแปc quyแปn sแปญ dแปฅng giแบฅy phรฉp, chแปฉng chแป hร nh nghแป cรณ thแปi hแบกn hoแบทc ฤรฌnh
chแป hoแบกt ฤแปng cรณ thแปi hแบกn trong lฤฉnh vแปฑc giao thรดng ฤฦฐแปng bแป, ฤฦฐแปng sแบฏt. khoแบฃn
5. trฦฐแปng hแปฃp ngฦฐแปi cรณ hร nh vi vi phแบกm bแป รกp dแปฅng hรฌnh thแปฉc xแปญ phแบกt tฦฐแปc quyแปn
sแปญ dแปฅng giแบฅy phรฉp, chแปฉng chแป hร nh nghแป nhฦฐng thแปi hแบกn sแปญ dแปฅng cรฒn lแบกi cแปงa giแบฅy
phรฉp, chแปฉng chแป hร nh nghแป ฤรณ รญt hฦกn thแปi hแบกn bแป tฦฐแปc thรฌ ngฦฐแปi cรณ thแบฉm quyแปn vแบซn
ra quyแบฟt ฤแปnh xแปญ phแบกt cรณ รกp dแปฅng hรฌnh thแปฉc tฦฐแปc quyแปn sแปญ dแปฅng giแบฅy phรฉp, chแปฉng
chแป hร nh nghแป theo quy ฤแปnh ฤแปi vแปi hร nh vi vi phแบกm. trong thแปi gian bแป tฦฐแปc quyแปn
sแปญ dแปฅng giแบฅy phรฉp, chแปฉng chแป hร nh nghแป, cรก nhรขn, tแป chแปฉc khรดng ฤฦฐแปฃc lร m thแปง tแปฅc
cแบฅp ฤแปi, cแบฅp mแปi giแบฅy phรฉp, chแปฉng chแป hร nh nghแป. '
- source_sentence: Quy ฤแปnh vแป trao ฤแปi dแปฏ liแปu thi hร nh รกn hรฌnh sแปฑ ฤฦฐแปฃc quy ฤแปnh
nhฦฐ thแบฟ nร o?
sentences:
- ฤiแปu 13. Quy ฤแปnh vแป bร n giao giแปฏa cรกc kรญp trแปฑc. sau mแปi ฤแปฃt kiแปm tra, cรกc kรญp
tuแบงn tra, canh gรกc ฤรช phแบฃi ghi chรฉp ฤแบงy ฤแปง tรฌnh hรฌnh diแปn biแบฟn vร hฦฐ hแปng ฤรช ฤiแปu
vร o sแป nhแบญt kรฝ tuแบงn tra, canh gรกc theo mแบซu quy ฤแปnh vร bร n giao ฤแบงy ฤแปง cho kรญp
sau. ngฦฐแปi thay mแบทt kรญp giao vร nhแบญn phแบฃi kรฝ vร ghi rรต hแป tรชn, ngร y giแป vร o sแป.
sau mแปi ngร y ฤแปi trฦฐแปng vร cรกn bแป chuyรชn trรกch quแบฃn lรฝ ฤรช ฤiแปu kรฝ xรกc nhแบญn tรฌnh
hรฌnh trong ngร y ฤแป theo dรตi vร lร m cฦก sแป cho viแปc chi trแบฃ thรน lao theo quy ฤแปnh.
- 'ฤiแปu 33. Bรกo cรกo cแปงa tแป chแปฉc tฦฐ vแบฅn hแป sฦก chร o bรกn trรกi phiแบฟu, tแป chแปฉc ฤแบฅu thแบงu,
bแบฃo lรฃnh, ฤแบกi lรฝ phรกt hร nh, tแป chแปฉc ฤฤng kรฝ, lฦฐu kรฝ trรกi phiแบฟu vร Sแป giao dแปch
chแปฉng khoรกn. ฤiแปm b) ngoร i chแบฟ ฤแป bรกo cรกo ฤแปnh kแปณ theo quy ฤแปnh tแบกi ฤiแปm a khoแบฃn
nร y, sแป giao dแปch chแปฉng khoรกn bรกo cรกo ฤแปt xuแบฅt cho แปงy ban chแปฉng khoรกn nhร nฦฐแปc
vร bแป tร i chรญnh theo yรชu cแบงu cแปงa cฦก quan quแบฃn lรฝ. '
- 'ฤiแปu 12. Trao ฤแปi dแปฏ liแปu giแปฏa cฦก sแป dแปฏ liแปu vแป thi hร nh รกn hรฌnh sแปฑ vร cรกc cฦก
sแป dแปฏ liแปu khรกc liรชn quan. khoแบฃn 1. viแปc trao ฤแปi dแปฏ liแปu giแปฏa cฦก sแป dแปฏ liแปu
vแป thi hร nh รกn hรฌnh sแปฑ vร cรกc cฦก sแป dแปฏ liแปu khรกc liรชn quan phแบฃi thแปฑc hiแปn theo
quy ฤแปnh cแปงa phรกp luแบญt vร quy ฤแปnh cแปงa bแป cรดng an, bแป quแปc phรฒng. '
datasets:
- batmangiaicuuthegioi/zalo-legal-triplets
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy
model-index:
- name: SentenceTransformer based on AITeamVN/Vietnamese_Embedding
results:
- task:
type: triplet
name: Triplet
dataset:
name: zalo legal
type: zalo_legal
metrics:
- type: cosine_accuracy
value: 1.0
name: Cosine Accuracy
- type: cosine_accuracy
value: 1.0
name: Cosine Accuracy
---
# SentenceTransformer based on AITeamVN/Vietnamese_Embedding
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [AITeamVN/Vietnamese_Embedding](https://huggingface.co/AITeamVN/Vietnamese_Embedding) on the [zalo-legal-triplets](https://huggingface.co/datasets/batmangiaicuuthegioi/zalo-legal-triplets) dataset. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [AITeamVN/Vietnamese_Embedding](https://huggingface.co/AITeamVN/Vietnamese_Embedding) <!-- at revision 9f671cc30908f1d851787efcc05b7d15bad8b615 -->
- **Maximum Sequence Length:** 8192 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- [zalo-legal-triplets](https://huggingface.co/datasets/batmangiaicuuthegioi/zalo-legal-triplets)
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("batmangiaicuuthegioi/bi-encoders-embeddings")
# Run inference
sentences = [
'Quy ฤแปnh vแป trao ฤแปi dแปฏ liแปu thi hร nh รกn hรฌnh sแปฑ ฤฦฐแปฃc quy ฤแปnh nhฦฐ thแบฟ nร o?',
'ฤiแปu 12. Trao ฤแปi dแปฏ liแปu giแปฏa cฦก sแป dแปฏ liแปu vแป thi hร nh รกn hรฌnh sแปฑ vร cรกc cฦก sแป dแปฏ liแปu khรกc liรชn quan. khoแบฃn 1. viแปc trao ฤแปi dแปฏ liแปu giแปฏa cฦก sแป dแปฏ liแปu vแป thi hร nh รกn hรฌnh sแปฑ vร cรกc cฦก sแป dแปฏ liแปu khรกc liรชn quan phแบฃi thแปฑc hiแปn theo quy ฤแปnh cแปงa phรกp luแบญt vร quy ฤแปnh cแปงa bแป cรดng an, bแป quแปc phรฒng. ',
'ฤiแปu 13. Quy ฤแปnh vแป bร n giao giแปฏa cรกc kรญp trแปฑc. sau mแปi ฤแปฃt kiแปm tra, cรกc kรญp tuแบงn tra, canh gรกc ฤรช phแบฃi ghi chรฉp ฤแบงy ฤแปง tรฌnh hรฌnh diแปn biแบฟn vร hฦฐ hแปng ฤรช ฤiแปu vร o sแป nhแบญt kรฝ tuแบงn tra, canh gรกc theo mแบซu quy ฤแปnh vร bร n giao ฤแบงy ฤแปง cho kรญp sau. ngฦฐแปi thay mแบทt kรญp giao vร nhแบญn phแบฃi kรฝ vร ghi rรต hแป tรชn, ngร y giแป vร o sแป. sau mแปi ngร y ฤแปi trฦฐแปng vร cรกn bแป chuyรชn trรกch quแบฃn lรฝ ฤรช ฤiแปu kรฝ xรกc nhแบญn tรฌnh hรฌnh trong ngร y ฤแป theo dรตi vร lร m cฦก sแป cho viแปc chi trแบฃ thรน lao theo quy ฤแปnh.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Triplet
* Dataset: `zalo_legal`
* Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator)
| Metric | Value |
|:--------------------|:--------|
| **cosine_accuracy** | **1.0** |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### zalo-legal-triplets
* Dataset: [zalo-legal-triplets](https://huggingface.co/datasets/batmangiaicuuthegioi/zalo-legal-triplets) at [15e0566](https://huggingface.co/datasets/batmangiaicuuthegioi/zalo-legal-triplets/tree/15e0566d390f73b5574a3d928cb8353cb6656fba)
* Size: 37,059 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 7 tokens</li><li>mean: 22.08 tokens</li><li>max: 47 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 82.98 tokens</li><li>max: 344 tokens</li></ul> | <ul><li>min: 25 tokens</li><li>mean: 76.65 tokens</li><li>max: 220 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>Mแปฉc phแบกt ฤแปi vแปi hร nh vi ฤiแปu khiแปn xe mรกy dแบซn, dแบฏt theo sรบc vแบญt ?</code> | <code>ฤiแปu 63. Xแปญ phแบกt nhรขn viรชn ฤฦฐแปng sแบฏt trแปฑc tiแบฟp phแปฅc vแปฅ chแบกy tร u (trแปซ lรกi tร u vร phแปฅ lรกi tร u) vi phแบกm quy ฤแปnh vแป nแปng ฤแป cแปn hoแบทc sแปญ dแปฅng cรกc chแบฅt kรญch thรญch khรกc mร phรกp luแบญt cแบฅm sแปญ dแปฅng. ฤiแปm c) khi lร m nhiแปm vแปฅ mร trong cฦก thแป cรณ chแบฅt kรญch thรญch khรกc mร phรกp luแบญt cแบฅm sแปญ dแปฅng.</code> | <code>ฤiแปu 4. Nhiแปm vแปฅ cแปงa lแปฑc lฦฐแปฃng tuแบงn tra, canh gรกc ฤรช. khoแบฃn 5. ฤeo phรน hiแปu khi lร m nhiแปm vแปฅ.</code> |
| <code>Theo quy ฤแปnh phรกp luแบญt, dแบซn xuแบฅt cแปงa cรกc loร i ฤแปng vแบญt, thแปฑc vแบญt lร gรฌ?</code> | <code>ฤiแปu 3. Giแบฃi thรญch tแปซ ngแปฏ. khoแบฃn 26. mแบซu vแบญt sฤn bแบฏt lร mแบซu vแบญt cรณ ฤฦฐแปฃc tแปซ cรกc hoแบกt ฤแปng sฤn bแบฏt hแปฃp phรกp. </code> | <code>ฤiแปu 17. Trรกch nhiแปm cแปงa Sแป Nรดng nghiแปp vร Phรกt triแปn nรดng thรดn. khoแบฃn 3. khi cรณ bรกo ฤแปng lลฉ tแปซ cแบฅp i trแป lรชn, sแป nรดng nghiแปp vร phรกt triแปn nรดng thรดn phแบฃi chแป ฤแบกo, tแป chแปฉc kiแปm tra, ฤรดn ฤแปc cรดng tรกc tuแบงn tra, canh gรกc แป cรกc tuyแบฟn ฤรช.</code> |
| <code>Mแปฅc tiรชu cแปงa giรกo dแปฅc nghแป nghiแปp tแปซ thรกng 7/2020 ฤฦฐแปฃc quy ฤแปnh nhฦฐ thแบฟ nร o?</code> | <code>ฤiแปu 36. Mแปฅc tiรชu cแปงa giรกo dแปฅc nghแป nghiแปp. giรกo dแปฅc nghแป nghiแปp nhแบฑm ฤร o tแบกo nhรขn lแปฑc trแปฑc tiแบฟp cho sแบฃn xuแบฅt, kinh doanh vร dแปch vแปฅ, cรณ nฤng lแปฑc hร nh nghแป tฦฐฦกng แปฉng vแปi trรฌnh ฤแป ฤร o tแบกo; cรณ ฤแบกo ฤแปฉc, sแปฉc khแปe; cรณ trรกch nhiแปm nghแป nghiแปp; cรณ khแบฃ nฤng sรกng tแบกo, thรญch แปฉng vแปi mรดi trฦฐแปng hแปi nhแบญp quแปc tแบฟ; bแบฃo ฤแบฃm nรขng cao nฤng suแบฅt, chแบฅt lฦฐแปฃng lao ฤแปng; tแบกo ฤiแปu kiแปn cho ngฦฐแปi hแปc sau khi hoร n thร nh khรณa hแปc cรณ khแบฃ nฤng tรฌm viแปc lร m, tแปฑ tแบกo viแปc lร m hoแบทc hแปc trรฌnh ฤแป cao hฦกn.</code> | <code>ฤiแปu 3. Tiรชu chuแบฉn cแปงa cรกc thร nh viรชn thuแปc lแปฑc lฦฐแปฃng tuแบงn tra, canh gรกc ฤรช. khoแบฃn 2. cรณ tinh thแบงn trรกch nhiแปm, chแปu ฤแปฑng gian khแป, khแบฏc phแปฅc khรณ khฤn, quen sรดng nฦฐแปc vร biแบฟt bฦกi, cรณ kiแบฟn thแปฉc, kinh nghiแปm hแป ฤรช, phรฒng, chแปng lแปฅt, bรฃo.</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
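As a rough illustration of how these loss parameters translate to code (not the exact training script; the base checkpoint path is a placeholder):
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MultipleNegativesRankingLoss
from sentence_transformers.util import cos_sim

model = SentenceTransformer("path/to/base-model")  # placeholder base checkpoint
# scale=20.0 with cosine similarity, matching the parameters above; in-batch
# negatives come for free from the other triplets in each batch.
loss = MultipleNegativesRankingLoss(model=model, scale=20.0, similarity_fct=cos_sim)
```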
### Evaluation Dataset
#### zalo-legal-triplets
* Dataset: [zalo-legal-triplets](https://huggingface.co/datasets/batmangiaicuuthegioi/zalo-legal-triplets) at [15e0566](https://huggingface.co/datasets/batmangiaicuuthegioi/zalo-legal-triplets/tree/15e0566d390f73b5574a3d928cb8353cb6656fba)
* Size: 37,059 evaluation samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 7 tokens</li><li>mean: 21.7 tokens</li><li>max: 47 tokens</li></ul> | <ul><li>min: 17 tokens</li><li>mean: 79.22 tokens</li><li>max: 327 tokens</li></ul> | <ul><li>min: 25 tokens</li><li>mean: 74.1 tokens</li><li>max: 220 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>Nghiรชn cแปฉu y sinh hแปc liรชn quan ฤแบฟn con ngฦฐแปi lร gรฌ?</code> | <code>ฤiแปu 31. Thแบฉm ฤแปnh nghiรชn cแปฉu theo quy trรฌnh rรบt gแปn. khoแบฃn 4. ngoแบกi trแปซ trฦฐแปng hแปฃp hแปp khแบฉn cแบฅp, tแบฅt cแบฃ tร i liแปu ฤแป nghแป xem xรฉt phแบฃi ฤฦฐแปฃc gแปญi tแปi thร nh viรชn hแปi ฤแปng ฤแบกo ฤแปฉc ฤฦฐแปฃc phรขn cรดng nhแบญn xรฉt trฦฐแปc รญt nhแบฅt 05 ngร y lร m viแปc so vแปi ngร y yรชu cแบงu gแปญi lแบกi phiแบฟu nhแบญn xรฉt, ฤรกnh giรก nghiรชn cแปฉu. </code> | <code>ฤiแปu 10. Nแปi dung tuแบงn tra canh gรกc cแปng qua ฤรช. khoแบฃn 2. ngฦฐแปi tuแบงn tra, canh gรกc phแบฃi kiแปm tra kแปน phแบงn tiแบฟp giรกp giแปฏa thรขn cแปng, tฦฐแปng cรกnh gร cแปงa cแปng vแปi ฤรช; cรกnh cแปng, bแป phแบญn ฤรณng mแป cรกnh cแปng, cแปญa cแปng, thรขn cแปng vร khu vแปฑc thฦฐแปฃng, hแบก lฦฐu cแปng ฤแป phรกt hiแปn kแปp thแปi nhแปฏng sแปฑ cแป xแบฃy ra. </code> |
| <code>Hแป sฦก cแบฅp lแบกi Giแบฅy chแปฉng nhแบญn ฤแปง ฤiแปu kiแปn hoแบกt ฤแปng dแปch vแปฅ giรกm ฤแปnh cรดng nghแป bao gแปm nhแปฏng giแบฅy tแป gรฌ?</code> | <code>ฤiแปu 38. Hแป sฦก cแบฅp Giแบฅy chแปฉng nhแบญn ฤแปง ฤiแปu kiแปn hoแบกt ฤแปng dแปch vแปฅ giรกm ฤแปnh cรดng nghแป. ฤiแปm e) mแบซu chแปฉng thฦฐ giรกm ฤแปnh cแปงa tแป chแปฉc. </code> | <code>ฤiแปu 6. Trang bแป dแปฅng cแปฅ, sแป sรกch. khoแบฃn 7. viแปc giao nhแบญn cรกc dแปฅng cแปฅ vร sแป sรกch trรชn ฤรขy phแบฃi ฤฦฐแปฃc lแบญp biรชn bแบฃn ฤแป quแบฃn lรฝ, theo dรตi.</code> |
| <code>Chแบกy quรก tแปc ฤแป bao nhiรชu km thรฌ xe รด tรด sแบฝ bแป giam bแบฑng?</code> | <code>ฤiแปu 55. Xแปญ phแบกt cรกc hร nh vi vi phแบกm quy ฤแปnh quแบฃn lรฝ, bแบฃo trรฌ kแบฟt cแบฅu hแบก tแบงng ฤฦฐแปng sแบฏt. ฤiแปm b) thแปฑc hiแปn hร nh vi quy ฤแปnh tแบกi ฤiแปm c khoแบฃn 3 ฤiแปu nร y buแปc phแบฃi tแป chแปฉc sแปญa chแปฏa, bแป sung, gia cแป, thay thแบฟ cรกc hฦฐ hแปng kแบฟt cแบฅu hแบก tแบงng ฤฦฐแปng sแบฏt ฤแป bแบฃo ฤแบฃm chแบฅt lฦฐแปฃng theo cรดng lแปnh tแปc ฤแป, cรดng lแปnh tแบฃi trแปng ฤรฃ cรดng bแป.</code> | <code>ฤiแปu 9. Nแปi dung tuแบงn tra, canh gรกc ฤรช. ฤiแปm d) mแปi kรญp tuแบงn tra phแบฃi kiแปm tra vฦฐแปฃt quรก phแบกm vi phแปฅ trรกch vแป hai phรญa, mแปi phรญa 50m. ฤแปi vแปi nhแปฏng khu vแปฑc ฤรฃ tแปซng xแบฃy ra sแปฑ cแป hฦฐ hแปng, phแบฃi kiแปm tra quan sรกt rแปng hฦกn ฤแป phรกt hiแปn sแปฑ cแป. </code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 4
- `per_device_eval_batch_size`: 2
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
- `fp16`: True
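A minimal sketch of how the non-default values above map onto `SentenceTransformerTrainingArguments` (the output directory is a placeholder):
```python
from sentence_transformers import SentenceTransformerTrainingArguments

args = SentenceTransformerTrainingArguments(
    output_dir="output",  # placeholder
    eval_strategy="steps",
    per_device_train_batch_size=4,
    per_device_eval_batch_size=2,
    num_train_epochs=1,
    warmup_ratio=0.1,
    fp16=True,
)
```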
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 4
- `per_device_eval_batch_size`: 2
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss | zalo_legal_cosine_accuracy |
|:------:|:----:|:-------------:|:---------------:|:--------------------------:|
| 0.3084 | 2000 | 0.2978 | 0.0778 | 0.9996 |
| 0.6167 | 4000 | 0.1735 | 0.0522 | 1.0 |
| 0.9251 | 6000 | 0.1148 | 0.0330 | 1.0 |
| 1.0 | 6486 | - | - | 1.0 |
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.3.1
- Transformers: 4.47.0
- PyTorch: 2.5.1+cu121
- Accelerate: 1.2.1
- Datasets: 3.3.1
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
fizziehaq/q_learn-taxi-v3 | fizziehaq | 2025-05-21T16:16:07Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2025-05-21T16:16:04Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q_learn-taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gymnasium as gym  # the Deep RL course notebooks use the gymnasium API
# `load_from_hub` is the helper defined in the Hugging Face Deep RL course notebook (Unit 2)
model = load_from_hub(repo_id="fizziehaq/q_learn-taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
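Continuing from the snippet above, a hedged sketch of how the reported mean reward can be reproduced with a greedy rollout; the episode count is an arbitrary choice, and the `"qtable"` key follows the dictionary layout used in the Deep RL course notebooks:
```python
import numpy as np

rewards = []
for episode in range(100):  # arbitrary number of evaluation episodes
    state, info = env.reset()
    total, done = 0.0, False
    while not done:
        action = int(np.argmax(model["qtable"][state]))  # greedy action
        state, reward, terminated, truncated, info = env.step(action)
        total += reward
        done = terminated or truncated
    rewards.append(total)
print(f"mean_reward: {np.mean(rewards):.2f} +/- {np.std(rewards):.2f}")
```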
|
phospho-app/PAphospho-gr00t-tictactoe-A1-orange-50010 | phospho-app | 2025-05-21T16:02:21Z | 0 | 0 | null | [
"phosphobot",
"gr00t",
"region:us"
]
| null | 2025-05-21T15:59:41Z |
---
tags:
- phosphobot
- gr00t
task_categories:
- robotics
---
# gr00t Model - phospho Training Pipeline
## Error Traceback
We faced an issue while training your model.
```
Traceback (most recent call last):
File "/root/src/helper.py", line 229, in predict
trainer.train(timeout_seconds=timeout_seconds)
File "/root/phosphobot/am/gr00t.py", line 1067, in train
asyncio.run(
File "/opt/conda/lib/python3.11/asyncio/runners.py", line 190, in run
return runner.run(main)
^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/asyncio/runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/asyncio/base_events.py", line 654, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "/root/phosphobot/am/gr00t.py", line 967, in run_gr00t_training
raise RuntimeError(error_msg)
RuntimeError: Training process failed with exit code 1:
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/nn/modules/normalization.py", line 217, in forward
return F.layer_norm(
^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/nn/functional.py", line 2900, in layer_norm
return torch.layer_norm(
^^^^^^^^^^^^^^^^^
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 288.00 MiB. GPU 0 has a total capacity of 79.25 GiB of which 24.75 MiB is free. Process 64 has 79.22 GiB memory in use. Of the allocated memory 78.46 GiB is allocated by PyTorch, and 266.39 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
0%| | 0/1560 [00:09<?, ?it/s]
The current batch size is too large for the GPU.
Please consider lowering it to fit in memory.
We train on an 80GB A100 GPU.
```
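As the traceback suggests, the usual mitigations are a smaller batch size and expandable CUDA segments; a minimal sketch (the environment variable must be set before any CUDA allocation happens):
```python
import os

# Reduce allocator fragmentation, as recommended in the traceback above.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "expandable_segments:True"

# Then relaunch the training run with a smaller batch size,
# e.g. 64 or 32 instead of 128.
```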
## Training parameters:
- **Dataset**: [PAphospho/tictactoe-A1-orange](https://huggingface.co/datasets/PAphospho/tictactoe-A1-orange)
- **Wandb run URL**: None
- **Epochs**: 10
- **Batch size**: 128
- **Training steps**: None
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
DanielNRU/pollen-ner-1250 | DanielNRU | 2025-05-21T15:40:29Z | 2 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:DeepPavlov/rubert-base-cased",
"base_model:adapter:DeepPavlov/rubert-base-cased",
"region:us"
]
| null | 2025-05-20T11:29:05Z | ---
library_name: peft
base_model: DeepPavlov/rubert-base-cased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
model-index:
- name: pollen-ner-1250
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pollen-ner-1250
This model is a fine-tuned version of [DeepPavlov/rubert-base-cased](https://huggingface.co/DeepPavlov/rubert-base-cased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1455
- Precision: 0.8614
- Recall: 0.9237
- F1: 0.8915
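A hedged sketch of loading this adapter with PEFT; the token-classification head and its label count are assumptions, so adjust them to the actual training setup:
```python
from peft import PeftModel
from transformers import AutoModelForTokenClassification, AutoTokenizer

# The number of labels is an assumption for illustration only.
base = AutoModelForTokenClassification.from_pretrained(
    "DeepPavlov/rubert-base-cased", num_labels=3
)
model = PeftModel.from_pretrained(base, "DanielNRU/pollen-ner-1250")
tokenizer = AutoTokenizer.from_pretrained("DeepPavlov/rubert-base-cased")
```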
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|
| No log | 1.0 | 157 | 0.1455 | 0.8614 | 0.9237 | 0.8915 |
| No log | 2.0 | 314 | 0.1406 | 0.8625 | 0.9197 | 0.8902 |
| No log | 3.0 | 471 | 0.1420 | 0.8596 | 0.9217 | 0.8895 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.7.0+cu128
- Datasets 3.5.0
- Tokenizers 0.21.1 |
magnifi/parser_user_v42a_epoch_6_lr_0p002_awq | magnifi | 2025-05-21T15:15:50Z | 0 | 0 | null | [
"safetensors",
"mistral",
"license:apache-2.0",
"4-bit",
"awq",
"region:us"
]
| null | 2025-05-21T15:12:04Z | ---
license: apache-2.0
---
|
Bubobot/ppo-SnowballTarget | Bubobot | 2025-05-21T13:58:34Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
]
| reinforcement-learning | 2025-05-21T13:58:28Z | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Bubobot/ppo-SnowballTarget
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
dzanbek/079fc0df-2610-4d3e-8436-088f5165247d | dzanbek | 2025-05-21T13:54:11Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"mistral",
"text-generation",
"generated_from_trainer",
"axolotl",
"dpo",
"trl",
"unsloth",
"conversational",
"arxiv:2305.18290",
"base_model:unsloth/mistral-7b-instruct-v0.2",
"base_model:quantized:unsloth/mistral-7b-instruct-v0.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
]
| text-generation | 2025-05-21T13:35:42Z | ---
base_model: unsloth/mistral-7b-instruct-v0.2
library_name: transformers
model_name: 079fc0df-2610-4d3e-8436-088f5165247d
tags:
- generated_from_trainer
- axolotl
- dpo
- trl
- unsloth
licence: license
---
# Model Card for 079fc0df-2610-4d3e-8436-088f5165247d
This model is a fine-tuned version of [unsloth/mistral-7b-instruct-v0.2](https://huggingface.co/unsloth/mistral-7b-instruct-v0.2).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="dzanbek/079fc0df-2610-4d3e-8436-088f5165247d", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/dedok-yo/s56-2/runs/seu8ok1q)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
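For illustration, a minimal DPO fine-tuning sketch with TRL; the dataset and hyperparameters below are placeholders rather than the ones used for this run:
```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model = AutoModelForCausalLM.from_pretrained("unsloth/mistral-7b-instruct-v0.2")
tokenizer = AutoTokenizer.from_pretrained("unsloth/mistral-7b-instruct-v0.2")

# Placeholder preference data with "prompt", "chosen" and "rejected" columns.
dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

training_args = DPOConfig(output_dir="dpo-output", per_device_train_batch_size=1)
trainer = DPOTrainer(
    model=model,
    args=training_args,
    train_dataset=dataset,
    processing_class=tokenizer,
)
trainer.train()
```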
### Framework versions
- TRL: 0.12.0.dev0
- Transformers: 4.46.0
- Pytorch: 2.5.0+cu124
- Datasets: 3.0.1
- Tokenizers: 0.20.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouรฉdec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
frogdrawguess/Qwen-7B-Chat-4bit | frogdrawguess | 2025-05-21T13:05:20Z | 0 | 0 | null | [
"safetensors",
"qwen",
"custom_code",
"license:apache-2.0",
"region:us"
]
| null | 2025-05-21T09:03:12Z | ---
license: apache-2.0
---
|
xw17/Llama-3.2-3B-Instruct_finetuned_2_optimized1_task_grouping_off_FT | xw17 | 2025-05-21T12:18:39Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-21T12:15:38Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
DeepGlint-AI/MLCD-Seg | DeepGlint-AI | 2025-05-21T11:49:39Z | 22 | 7 | null | [
"safetensors",
"qwen2",
"custom_code",
"base_model:DeepGlint-AI/MLCD-Embodied-7B",
"base_model:finetune:DeepGlint-AI/MLCD-Embodied-7B",
"license:apache-2.0",
"region:us"
]
| null | 2025-03-14T10:19:53Z | ---
license: apache-2.0
base_model:
- DeepGlint-AI/MLCD-Embodied-7B
---
[](https://paperswithcode.com/sota/referring-expression-segmentation-on-refcocog?p=multi-label-cluster-discrimination-for-visual)
[](https://paperswithcode.com/sota/referring-expression-segmentation-on-refcoco-5?p=multi-label-cluster-discrimination-for-visual)
[](https://paperswithcode.com/sota/referring-expression-segmentation-on-refcoco-3?p=multi-label-cluster-discrimination-for-visual)
[](https://paperswithcode.com/sota/referring-expression-segmentation-on-refcocog-1?p=multi-label-cluster-discrimination-for-visual)
[](https://paperswithcode.com/sota/referring-expression-segmentation-on-refcoco-8?p=multi-label-cluster-discrimination-for-visual)
[](https://paperswithcode.com/sota/referring-expression-segmentation-on-refcoco-4?p=multi-label-cluster-discrimination-for-visual)
[](https://paperswithcode.com/sota/referring-expression-segmentation-on-refcoco-9?p=multi-label-cluster-discrimination-for-visual)
[](https://paperswithcode.com/sota/referring-expression-segmentation-on-refcoco?p=multi-label-cluster-discrimination-for-visual)
[](https://paperswithcode.com/sota/referring-expression-segmentation-on-refcoco?p=multi-label-cluster-discrimination-for-visual)
## RefCOCO Segmentation Evaluation:
| Dataset | Split | MLCD-seg-7B | EVF-SAM | GLaMM | VisionLLM v2| LISA |
| :-- | :-: | :-: | :-: | :-: | :-: | :-: |
| RefCOCO | val | **83.6** | 82.4 | 79.5 | 79.2 | 74.9 |
| RefCOCO | testA | **85.3** | 84.2 | 83.2 | 82.3 | 79.1 |
| RefCOCO | testB | **81.5** | 80.2 | 76.9 | 77.0 | 72.3 |
| RefCOCO+ | val | **79.4** | 76.5 | 72.6 | 68.9 | 65.1 |
| RefCOCO+ | testA | **82.9** | 80.0 | 78.7 | 75.8 | 70.8 |
| RefCOCO+ | testB | **75.6** | 71.9 | 64.6 | 61.8 | 58.1 |
| RefCOCOg | val | **79.7** | 78.2 | 74.2 | 73.3 | 67.9 |
| RefCOCOg | test | **80.5** | 78.3 | 74.9 | 74.8 | 70.6 |
## Evaluation
If you just want to run inference with this model, refer to the sample below:
```python
import torch
from transformers import AutoModel, AutoTokenizer
from PIL import Image

model_path = "DeepGlint-AI/MLCD-Seg" # or use your local path
mlcd_seg = AutoModel.from_pretrained(
    model_path,
    torch_dtype=torch.float16,
    trust_remote_code=True
).cuda()
tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=False)
# Assuming you have an image named test.jpg
seg_img = Image.open("test.jpg").convert('RGB')
seg_prompt = "Could you provide a segmentation mask for the right giraffe in this image?"
pred_mask = mlcd_seg.seg(seg_img, seg_prompt, tokenizer, force_seg=False)  # was `model.seg`, but `model` is undefined here
```
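The returned `pred_mask` can then be saved as an image; a small sketch, assuming the mask is a float tensor of shape `[1, H, W]` as in the video example further below:
```python
import numpy as np
from PIL import Image

# Threshold the predicted mask and save it as a grayscale PNG.
binary = (pred_mask.squeeze(0).cpu() > 0.5).numpy()
Image.fromarray((binary * 255).astype(np.uint8)).save("giraffe_mask.png")
```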
If you want to evaluate this model on a benchmark dataset (e.g. RefCOCO), use the following method with `force_seg=True`:
```python
import torch
from transformers import AutoModel, AutoTokenizer
from PIL import Image

model_path = "DeepGlint-AI/MLCD-Seg" # or use your local path
mlcd_seg = AutoModel.from_pretrained(
    model_path,
    torch_dtype=torch.float16,
    trust_remote_code=True
).cuda()
tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=False)
# Assuming you have an image named test.jpg
seg_img = Image.open("test.jpg").convert('RGB')
seg_prompt = "Could you provide a segmentation mask for the right giraffe in this image?"
pred_mask = mlcd_seg.seg(seg_img, seg_prompt, tokenizer, force_seg=True)  # was `model.seg`, but `model` is undefined here
```
If you want to run this model on a video, refer to the sample below:
```python
from transformers import AutoModel, AutoTokenizer
from PIL import Image
import torch
from torchvision import transforms
import subprocess
import os
# video path
video_path = "updownfunk.mp4"
input_dir = "frames"
output_dir = "mask_frames"
os.makedirs(input_dir, exist_ok=True)
os.makedirs(output_dir, exist_ok=True)
# assumes ffmpeg is installed; extract mp4 -> jpg frames
cmd = [
    "ffmpeg",
    "-i", video_path,
    "-vf", "fps=30",  # 30 FPS
    "-qscale:v", "1",
    os.path.join(input_dir, "frame_%04d.jpg")
]
subprocess.run(cmd)
# model path
model_path = "DeepGlint-AI/MLCD-Seg" # or use your local path
mlcd_seg = AutoModel.from_pretrained(
    model_path,
    torch_dtype=torch.float16,
    trust_remote_code=True
).cuda()
tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=False)
# read jpgs
image_files = sorted([f for f in os.listdir(input_dir) if f.endswith(('.jpg', '.png', '.jpeg'))])
for idx, filename in enumerate(image_files, start=1):
    src_path = os.path.join(input_dir, filename)
    seg_img = Image.open(src_path).convert('RGB')
    seg_prompt = "This <video> depicts a group of people dancing.\nCould you provide a segmentation mask for the man in pink suit?"
    pred_mask = mlcd_seg.predict_forward(seg_img, seg_prompt, tokenizer, force_seg=True)
    # Mask visualization
    pred_mask = pred_mask.squeeze(0).cpu()
    pred_mask = (pred_mask > 0.5).float()
    img_tensor = transforms.ToTensor()(seg_img)
    alpha = 0.2  # 20% transparency
    mask_color = torch.tensor([0.0, 1.0, 0.0]).view(3, 1, 1).to(img_tensor.device)  # green overlay
    black_bg = torch.zeros_like(img_tensor)  # black background
    masked_area = mask_color * alpha + img_tensor * (1 - alpha)
    background = black_bg * alpha + img_tensor * (1 - alpha)
    combined = torch.where(pred_mask.unsqueeze(0).bool(), masked_area, background)
    combined = combined.cpu()  # [3, H, W], CPU
    # Save masked frames
    new_name = f"{idx:04d}{os.path.splitext(filename)[1]}"
    dst_path = os.path.join(output_dir, new_name)
    transforms.ToPILImage()(combined.clamp(0, 1)).save(dst_path)
cmd = [
    "ffmpeg",
    "-y",
    "-framerate", str(30),  # input frame rate
    "-i", os.path.join(output_dir, "%04d.jpg"),
    "-c:v", "libx264",
    "-crf", str(23),
    "-pix_fmt", "yuv420p",
    "-vf", "fps=" + str(30),  # keep 30 fps, matching the extraction above
    "updownfunk_mask.mp4"  # output video
]
# jpgs -> mp4
subprocess.run(cmd, check=True)
```
## Example
<img src="https://github.com/user-attachments/assets/85c023a1-3e0c-4ea5-a764-1eb9ee0fbddf" alt="output" width="1024"/>
<img src="https://github.com/user-attachments/assets/5b767327-bd0a-4185-8f7e-b1ab0aa260c9" alt="output" width="1024"/>
<video width="80%" controls>
<source src="https://github.com/user-attachments/assets/380dee0d-47c4-4e01-8ff0-e69e62cccd7c">
</video>
## Citations
```
@misc{mlcdseg_wukun,
author = {Wu, Kun and Xie, Yin and Zhou, Xinyu and An, Xiang and Deng, Jiankang and Jie, Yu},
title = {MLCD-Seg},
year = {2025},
url = {https://github.com/deepglint/unicom/tree/main/downstream},
}
```
|
Tri0315/Triyatno | Tri0315 | 2025-05-21T11:48:58Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2025-05-21T11:48:58Z | ---
license: apache-2.0
---
|
jmalejandrob79/nrmmtrfckd5k | jmalejandrob79 | 2025-05-21T11:48:09Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
]
| text-to-image | 2025-05-21T09:40:45Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: nrmmtrfckd5k
---
# Nrmmtrfckd5K
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `nrmmtrfckd5k` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "nrmmtrfckd5k",
"lora_weights": "https://huggingface.co/jmalejandrob79/nrmmtrfckd5k/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```
## Use it with the [๐งจ diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('jmalejandrob79/nrmmtrfckd5k', weight_name='lora.safetensors')
image = pipeline('nrmmtrfckd5k').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
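For example, the adapter strength can be scaled or the LoRA fused into the base weights; a short sketch continuing from the pipeline above (the adapter name is an assumption):
```py
pipeline.load_lora_weights(
    "jmalejandrob79/nrmmtrfckd5k",
    weight_name="lora.safetensors",
    adapter_name="nrmmtrfckd5k",  # assumed adapter name
)
pipeline.set_adapters(["nrmmtrfckd5k"], adapter_weights=[0.8])  # run the LoRA at 80%
# Or bake the LoRA into the base weights for slightly faster inference:
pipeline.fuse_lora(lora_scale=0.8)
```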
## Training details
- Steps: 5000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/jmalejandrob79/nrmmtrfckd5k/discussions) to add images that show off what you've made with this LoRA.
|
danthepol/MNLP_M2_document_encoder | danthepol | 2025-05-21T11:39:33Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:10481",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:BAAI/bge-base-en-v1.5",
"base_model:finetune:BAAI/bge-base-en-v1.5",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2025-05-21T11:38:49Z | ---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:10481
- loss:MultipleNegativesRankingLoss
base_model: BAAI/bge-base-en-v1.5
widget:
- source_sentence: What is a layer of saturated porous rock?
sentences:
- Current requires a source of voltage, which is a difference in electric potential
energy. Sources of voltage include chemical cells and solar cells.
- Give examples of energy conversions between potential and kinetic energy.
- An aquifer is a layer of saturated porous rock. It lies below the water table.
An impermeable layer, such as clay, is below the aquifer.
- source_sentence: What happens to gas solubility as the temperature increases?
sentences:
- A nonrenewable resource is one that cannot be replaced as easily as it is consumed.
Fossil fuels are an example of nonrenewable resources. They take millions of years
to form naturally, and so they cannot be replaced as fast as they are consumed.
To take the place of fossil fuel use, alternative energy resources are being developed.
These alternative energy sources often utilize renewable resources. The following
are examples of sustainable alternative energy resources:.
- Gas solubility decreases as the temperature increases.
- An electrolytic cell is the apparatus used for carrying out an electrolysis reaction.
In an electrolytic cell, electric current is applied to provide a source of electrons
for driving the reaction in a nonspontaneous direction. In a voltaic cell, the
reaction goes in a direction that releases electrons spontaneously. In an electrolytic
cell, the input of electrons from an external source forces the reaction to go
in the opposite direction.
- source_sentence: The sun and many other light sources produce waves that are randomly
this?
sentences:
- The Sun and many other light sources produce waves that are randomly polarized
(see Figure 27.39). Such light is said to be unpolarized because it is composed
of many waves with all possible directions of polarization. Polaroid materials,
invented by the founder of Polaroid Corporation, Edwin Land, act as a polarizing
slit for light, allowing only polarization in one direction to pass through. Polarizing
filters are composed of long molecules aligned in one direction. Thinking of the
molecules as many slits, analogous to those for the oscillating ropes, we can
understand why only light with a specific polarization can get through. The axis
of a polarizing filter is the direction along which the filter passes the electric
field of an EM wave (see Figure 27.40).
- When you look at the Moon from Earth, you notice dark and light areas. The maria
are dark, solid, flat areas of lava. Maria covers around 16% of the Moon's surface,
mostly on the near side. The maria formed about 3.0 to 3.5 billion years ago,
when the Moon was continually bombarded by meteorites ( Figure below ). Large
meteorites broke through the Moon's newly formed surface. This caused magma to
flow out and fill the craters. Scientists estimate volcanic activity on the Moon
ended about 1.2 billion years ago.
- The structures of the human eye collect and focus light. They form a reduced,
upside-down image on the retina at the back of the eye.
- source_sentence: The combined gradient that affects an ion includes its concentration
gradient and its what?
sentences:
- '5.3 Active Transport The combined gradient that affects an ion includes its concentration
gradient and its electrical gradient. A positive ion, for example, might tend
to diffuse into a new area, down its concentration gradient, but if it is diffusing
into an area of net positive charge, its diffusion will be hampered by its electrical
gradient. When dealing with ions in aqueous solutions, a combination of the electrochemical
and concentration gradients, rather than just the concentration gradient alone,
must be considered. Living cells need certain substances that exist inside the
cell in concentrations greater than they exist in the extracellular space. Moving
substances up their electrochemical gradients requires energy from the cell. Active
transport uses energy stored in ATP to fuel this transport. Active transport of
small molecular-sized materials uses integral proteins in the cell membrane to
move the materials: These proteins are analogous to pumps. Some pumps, which carry
out primary active transport, couple directly with ATP to drive their action.
In co-transport (or secondary active transport), energy from primary transport
can be used to move another substance into the cell and up its concentration gradient.'
- The development of new technology is called technological design . It is similar
to scientific investigation. Both processes use evidence and logic to solve problems.
- Oceans cover more than 70 percent of Earth's surface and hold 97 percent of its
surface water. It's no surprise that the oceans have a big influence on the planet.
The oceans affect the atmosphere, climate, and living things.
- source_sentence: What are are segmented invertebrates in phylum annelida called?
sentences:
- Simple Model of DNA. In this simple model of DNA, each line represents a nucleotide
chain. The double helix shape forms when the two chains wrap around the same axis.
- '38.2 Bone Bone, or osseous tissue, is connective tissue that includes specialized
cells, mineral salts, and collagen fibers. The human skeleton can be divided into
long bones, short bones, flat bones, and irregular bones. Compact bone tissue
is composed of osteons and forms the external layer of all bones. Spongy bone
tissue is composed of trabeculae and forms the inner part of all bones. Four types
of cells compose bony tissue: osteocytes, osteoclasts, osteoprogenitor cells,
and osteoblasts. Ossification is the process of bone formation by osteoblasts.
Intramembranous ossification is the process of bone development from fibrous membranes.
Endochondral ossification is the process of bone development from hyaline cartilage.
Long bones lengthen as chondrocytes divide and secrete hyaline cartilage. Osteoblasts
replace cartilage with bone. Appositional growth is the increase in the diameter
of bones by the addition of bone tissue at the surface of bones. Bone remodeling
involves the processes of bone deposition by osteoblasts and bone resorption by
osteoclasts. Bone repair occurs in four stages and can take several months.'
- Annelids are segmented invertebrates in Phylum Annelida. They include earthworms,
polychaete worms, and leeches. Annelids have a coelom and several organ systems.
Their body segments may have a variety of different structures such as tentacles
or suckers. Annelids may be predators, parasites, filter feeders, or decomposers.
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
# SentenceTransformer based on BAAI/bge-base-en-v1.5
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
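Since this is a document encoder, a typical retrieval use looks like this; the corpus lines are taken from the widget examples above:
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("danthepol/MNLP_M2_document_encoder")

corpus = [
    "An aquifer is a layer of saturated porous rock. It lies below the water table.",
    "Gas solubility decreases as the temperature increases.",
]
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)

query_embedding = model.encode(
    "What is a layer of saturated porous rock?", convert_to_tensor=True
)
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=1)
print(hits[0])  # [{'corpus_id': 0, 'score': ...}]
```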
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) <!-- at revision a5beb1e3e68b9ab74eb54cfd186867f64f240e1a -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the ๐ค Hub
model = SentenceTransformer("danthepol/MNLP_M2_document_encoder")
# Run inference
sentences = [
'What are are segmented invertebrates in phylum annelida called?',
'Annelids are segmented invertebrates in Phylum Annelida. They include earthworms, polychaete worms, and leeches. Annelids have a coelom and several organ systems. Their body segments may have a variety of different structures such as tentacles or suckers. Annelids may be predators, parasites, filter feeders, or decomposers.',
'38.2 Bone Bone, or osseous tissue, is connective tissue that includes specialized cells, mineral salts, and collagen fibers. The human skeleton can be divided into long bones, short bones, flat bones, and irregular bones. Compact bone tissue is composed of osteons and forms the external layer of all bones. Spongy bone tissue is composed of trabeculae and forms the inner part of all bones. Four types of cells compose bony tissue: osteocytes, osteoclasts, osteoprogenitor cells, and osteoblasts. Ossification is the process of bone formation by osteoblasts. Intramembranous ossification is the process of bone development from fibrous membranes. Endochondral ossification is the process of bone development from hyaline cartilage. Long bones lengthen as chondrocytes divide and secrete hyaline cartilage. Osteoblasts replace cartilage with bone. Appositional growth is the increase in the diameter of bones by the addition of bone tissue at the surface of bones. Bone remodeling involves the processes of bone deposition by osteoblasts and bone resorption by osteoclasts. Bone repair occurs in four stages and can take several months.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 10,481 training samples
* Columns: <code>sentence_0</code> and <code>sentence_1</code>
* Approximate statistics based on the first 1000 samples:
| | sentence_0 | sentence_1 |
|:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 7 tokens</li><li>mean: 17.94 tokens</li><li>max: 59 tokens</li></ul> | <ul><li>min: 10 tokens</li><li>mean: 100.79 tokens</li><li>max: 512 tokens</li></ul> |
* Samples:
| sentence_0 | sentence_1 |
|:-----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>Vitamin d is made in the skin when it is exposed to what?</code> | <code>Vitamins are organic compounds that the body needs in small amounts to function properly. Humans need 16 different vitamins. Six of them are listed in Table below . Vitamin D is made in the skin when it is exposed to sunlight. Bacteria that normally live in the gut make vitamins B12 and K. All other vitamins must come from food. The table shows good food sources of the vitamins.</code> |
| <code>What is the process of the blastula forming 3 layers of cells called?</code> | <code>Gastrulation The typical blastula is a ball of cells. The next stage in embryonic development is the formation of the body plan. The cells in the blastula rearrange themselves spatially to form three layers of cells. This process is called gastrulation. During gastrulation, the blastula folds upon itself to form the three layers of cells. Each of these layers is called a germ layer and each germ layer differentiates into different organ systems. The three germs layers, shown in Figure 43.26, are the endoderm, the ectoderm, and the mesoderm. The ectoderm gives rise to the nervous system and the epidermis. The mesoderm gives rise to the muscle cells and connective tissue in the body. The endoderm gives rise to columnar cells found in the digestive system and many internal organs.</code> |
| <code>Microscopes were first developed in the early 1600s by this trade?</code> | <code>Microscopes were first developed in the early 1600s by eyeglass makers in The Netherlands and Denmark. The simplest compound microscope is constructed from two convex lenses as shown schematically in Figure 26.16. The first lens is called the objective lens, and has typical magnification values from 5ร to 100ร . In standard microscopes, the objectives are mounted such that when you switch between objectives, the sample remains in focus. Objectives arranged in this way are described as parfocal. The second, the eyepiece, also referred to as the ocular, has several lenses which slide inside a cylindrical barrel. The focusing ability is provided by the movement of both the objective lens and the eyepiece. The purpose of a microscope is to magnify small objects, and both lenses contribute to the final magnification. Additionally, the final enlarged image is produced in a location far enough from the observer to be easily viewed, since the eye cannot focus on objects or images that are too ...</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 8
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 3
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
</details>
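As a minimal sketch, the one non-default hyperparameter above (`multi_dataset_batch_sampler: round_robin`) would be set like this when building the training arguments; the output path is a placeholder:
```python
from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import MultiDatasetBatchSamplers

# Only multi_dataset_batch_sampler deviates from the defaults listed above.
args = SentenceTransformerTrainingArguments(
    output_dir="output",  # placeholder
    num_train_epochs=3,
    per_device_train_batch_size=8,
    learning_rate=5e-5,
    multi_dataset_batch_sampler=MultiDatasetBatchSamplers.ROUND_ROBIN,
)
```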
### Training Logs
| Epoch | Step | Training Loss |
|:------:|:----:|:-------------:|
| 0.3814 | 500 | 0.0735 |
| 0.7628 | 1000 | 0.0541 |
| 1.1442 | 1500 | 0.0422 |
| 1.5256 | 2000 | 0.0198 |
| 1.9069 | 2500 | 0.0241 |
| 2.2883 | 3000 | 0.0127 |
| 2.6697 | 3500 | 0.0084 |
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 4.1.0
- Transformers: 4.52.1
- PyTorch: 2.1.0+cu118
- Accelerate: 1.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
codewithRiz/janue2 | codewithRiz | 2025-05-21T11:20:33Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"fluxgym",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
]
| text-to-image | 2025-05-21T11:19:39Z | ---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- fluxgym
widget:
- output:
url: sample/janu_000980_00_20250521160357.png
text: riz1Jr is at beach
- output:
url: sample/janu_000980_01_20250521160441.png
text: riz1Jr is at rooftop
- output:
url: sample/janu_000980_02_20250521160526.png
text: riz1Jr driving car
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: riz1Jr
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# janu
A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym)
<Gallery />
## Trigger words
You should use `riz1Jr` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc.
Weights for this model are available in Safetensors format.
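A minimal loading sketch with the 🤗 diffusers library; the LoRA weight filename below is an assumption, so check this repository's file list for the actual name:
```python
import torch
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
# weight_name is assumed; replace it with the actual .safetensors file in this repo.
pipeline.load_lora_weights("codewithRiz/janue2", weight_name="janu.safetensors")
image = pipeline("riz1Jr is at beach").images[0]  # uses the trigger word riz1Jr
```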
|
KashyapGobubble/Llama-3.2-3B-Instruct-grpo-20250507_095439-grpo-20250521_082548 | KashyapGobubble | 2025-05-21T11:16:40Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"grpo",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-21T11:13:58Z | ---
library_name: transformers
tags:
- trl
- grpo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Eric1227/Qwen2.5-Coder-32B-Instruct-MLX-8bit | Eric1227 | 2025-05-21T10:55:17Z | 0 | 0 | mlx | [
"mlx",
"safetensors",
"qwen2",
"code",
"codeqwen",
"chat",
"qwen",
"qwen-coder",
"text-generation",
"conversational",
"en",
"base_model:Qwen/Qwen2.5-Coder-32B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-Coder-32B-Instruct",
"license:apache-2.0",
"8-bit",
"region:us"
]
| text-generation | 2025-05-21T10:00:47Z | ---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct/blob/main/LICENSE
language:
- en
base_model: Qwen/Qwen2.5-Coder-32B-Instruct
pipeline_tag: text-generation
library_name: mlx
tags:
- code
- codeqwen
- chat
- qwen
- qwen-coder
- mlx
---
|
Munia-ak/speecht5_finetuned_voxpopuli_nl | Munia-ak | 2025-05-21T10:15:35Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"dataset:voxpopuli",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
]
| text-to-audio | 2025-05-20T07:25:40Z | ---
library_name: transformers
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
datasets:
- voxpopuli
model-index:
- name: speecht5_finetuned_voxpopuli_nl
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_voxpopuli_nl
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the voxpopuli dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4605
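A minimal inference sketch with 🤗 Transformers; the all-zeros speaker embedding is only a placeholder, and a real x-vector speaker embedding will sound much better:
```python
import torch
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

processor = SpeechT5Processor.from_pretrained("Munia-ak/speecht5_finetuned_voxpopuli_nl")
model = SpeechT5ForTextToSpeech.from_pretrained("Munia-ak/speecht5_finetuned_voxpopuli_nl")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Hallo, dit is een test.", return_tensors="pt")
speaker_embeddings = torch.zeros((1, 512))  # placeholder; use a real x-vector embedding
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
```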
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-------:|:----:|:---------------:|
| 0.5165 | 4.3098 | 1000 | 0.4802 |
| 0.4937 | 8.6197 | 2000 | 0.4677 |
| 0.4902 | 12.9295 | 3000 | 0.4617 |
| 0.4932 | 17.2410 | 4000 | 0.4605 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
shakamone/trellis-large | shakamone | 2025-05-21T09:05:41Z | 0 | 0 | trellis | [
"trellis",
"image-to-3d",
"en",
"arxiv:2412.01506",
"license:mit",
"region:us"
]
| image-to-3d | 2025-05-21T08:58:38Z | ---
library_name: trellis
pipeline_tag: image-to-3d
license: mit
language:
- en
---
# TRELLIS Image Large
<!-- Provide a quick summary of what the model is/does. -->
The image-conditioned version of TRELLIS, a large 3D generative model. It was introduced in the paper [Structured 3D Latents for Scalable and Versatile 3D Generation](https://huggingface.co/papers/2412.01506).
Project page: https://trellis3d.github.io/
Code: https://github.com/Microsoft/TRELLIS
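A minimal usage sketch following the example in the TRELLIS repository; loading this mirror by repo id with `from_pretrained` is an assumption, and the input image path is a placeholder:
```python
from PIL import Image
from trellis.pipelines import TrellisImageTo3DPipeline  # from the TRELLIS GitHub repository

# Loading this mirror directly is an assumption; the upstream example uses the official weights.
pipeline = TrellisImageTo3DPipeline.from_pretrained("shakamone/trellis-large")
pipeline.cuda()

image = Image.open("example.png")  # placeholder input image
outputs = pipeline.run(image, seed=1)  # yields 3D representations (e.g. Gaussians, meshes)
```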
|
shadohead/lora_model_csm_1b_frieren | shadohead | 2025-05-21T08:29:55Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"csm",
"trl",
"en",
"base_model:unsloth/csm-1b",
"base_model:finetune:unsloth/csm-1b",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-21T08:29:47Z | ---
base_model: unsloth/csm-1b
tags:
- text-generation-inference
- transformers
- unsloth
- csm
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** shadohead
- **License:** apache-2.0
- **Finetuned from model:** unsloth/csm-1b
This csm model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
cluebbers/Mistral-7B-v0.1-adverserial-paraphrasing-sft | cluebbers | 2025-05-21T08:25:31Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-21T08:20:08Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
seoyeon316/gemma-3-1b-pt-MED | seoyeon316 | 2025-05-21T06:26:27Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-21T06:25:14Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
rgn-la/rgn-rodg-lora-flux | rgn-la | 2025-05-21T06:22:35Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
]
| text-to-image | 2025-05-21T06:02:06Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: rodg
---
# Rgn Rodg Lora Flux
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using the AI toolkit trainer: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `rodg` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "rodg",
"lora_weights": "https://huggingface.co/rgn-la/rgn-rodg-lora-flux/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('rgn-la/rgn-rodg-lora-flux', weight_name='lora.safetensors')
image = pipeline('rodg').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 1000
- Learning rate: 0.0004
- LoRA rank: 20
## Contribute your own examples
You can use the [community tab](https://huggingface.co/rgn-la/rgn-rodg-lora-flux/discussions) to add images that show off what you've made with this LoRA.
|
kisimManushya/finetuned_llama3_1 | kisimManushya | 2025-05-21T06:22:03Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"unsloth",
"trl",
"sft",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-21T06:21:57Z | ---
base_model: unsloth/llama-3.1-8b-instruct-unsloth-bnb-4bit
library_name: transformers
model_name: finetuned_llama3_1
tags:
- generated_from_trainer
- unsloth
- trl
- sft
licence: license
---
# Model Card for finetuned_llama3_1
This model is a fine-tuned version of [unsloth/llama-3.1-8b-instruct-unsloth-bnb-4bit](https://huggingface.co/unsloth/llama-3.1-8b-instruct-unsloth-bnb-4bit).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="kisimManushya/finetuned_llama3_1", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
RichardErkhov/Inabia-AI_-_ark_rinvoq_claim_lora_3.1_12042024-gguf | RichardErkhov | 2025-05-21T04:57:32Z | 0 | 0 | null | [
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-21T03:24:44Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
ark_rinvoq_claim_lora_3.1_12042024 - GGUF
- Model creator: https://huggingface.co/Inabia-AI/
- Original model: https://huggingface.co/Inabia-AI/ark_rinvoq_claim_lora_3.1_12042024/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [ark_rinvoq_claim_lora_3.1_12042024.Q2_K.gguf](https://huggingface.co/RichardErkhov/Inabia-AI_-_ark_rinvoq_claim_lora_3.1_12042024-gguf/blob/main/ark_rinvoq_claim_lora_3.1_12042024.Q2_K.gguf) | Q2_K | 2.96GB |
| [ark_rinvoq_claim_lora_3.1_12042024.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Inabia-AI_-_ark_rinvoq_claim_lora_3.1_12042024-gguf/blob/main/ark_rinvoq_claim_lora_3.1_12042024.IQ3_XS.gguf) | IQ3_XS | 3.28GB |
| [ark_rinvoq_claim_lora_3.1_12042024.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Inabia-AI_-_ark_rinvoq_claim_lora_3.1_12042024-gguf/blob/main/ark_rinvoq_claim_lora_3.1_12042024.IQ3_S.gguf) | IQ3_S | 3.43GB |
| [ark_rinvoq_claim_lora_3.1_12042024.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Inabia-AI_-_ark_rinvoq_claim_lora_3.1_12042024-gguf/blob/main/ark_rinvoq_claim_lora_3.1_12042024.Q3_K_S.gguf) | Q3_K_S | 3.41GB |
| [ark_rinvoq_claim_lora_3.1_12042024.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Inabia-AI_-_ark_rinvoq_claim_lora_3.1_12042024-gguf/blob/main/ark_rinvoq_claim_lora_3.1_12042024.IQ3_M.gguf) | IQ3_M | 3.52GB |
| [ark_rinvoq_claim_lora_3.1_12042024.Q3_K.gguf](https://huggingface.co/RichardErkhov/Inabia-AI_-_ark_rinvoq_claim_lora_3.1_12042024-gguf/blob/main/ark_rinvoq_claim_lora_3.1_12042024.Q3_K.gguf) | Q3_K | 3.74GB |
| [ark_rinvoq_claim_lora_3.1_12042024.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Inabia-AI_-_ark_rinvoq_claim_lora_3.1_12042024-gguf/blob/main/ark_rinvoq_claim_lora_3.1_12042024.Q3_K_M.gguf) | Q3_K_M | 3.74GB |
| [ark_rinvoq_claim_lora_3.1_12042024.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Inabia-AI_-_ark_rinvoq_claim_lora_3.1_12042024-gguf/blob/main/ark_rinvoq_claim_lora_3.1_12042024.Q3_K_L.gguf) | Q3_K_L | 4.03GB |
| [ark_rinvoq_claim_lora_3.1_12042024.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Inabia-AI_-_ark_rinvoq_claim_lora_3.1_12042024-gguf/blob/main/ark_rinvoq_claim_lora_3.1_12042024.IQ4_XS.gguf) | IQ4_XS | 4.18GB |
| [ark_rinvoq_claim_lora_3.1_12042024.Q4_0.gguf](https://huggingface.co/RichardErkhov/Inabia-AI_-_ark_rinvoq_claim_lora_3.1_12042024-gguf/blob/main/ark_rinvoq_claim_lora_3.1_12042024.Q4_0.gguf) | Q4_0 | 4.34GB |
| [ark_rinvoq_claim_lora_3.1_12042024.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Inabia-AI_-_ark_rinvoq_claim_lora_3.1_12042024-gguf/blob/main/ark_rinvoq_claim_lora_3.1_12042024.IQ4_NL.gguf) | IQ4_NL | 4.38GB |
| [ark_rinvoq_claim_lora_3.1_12042024.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Inabia-AI_-_ark_rinvoq_claim_lora_3.1_12042024-gguf/blob/main/ark_rinvoq_claim_lora_3.1_12042024.Q4_K_S.gguf) | Q4_K_S | 4.37GB |
| [ark_rinvoq_claim_lora_3.1_12042024.Q4_K.gguf](https://huggingface.co/RichardErkhov/Inabia-AI_-_ark_rinvoq_claim_lora_3.1_12042024-gguf/blob/main/ark_rinvoq_claim_lora_3.1_12042024.Q4_K.gguf) | Q4_K | 4.58GB |
| [ark_rinvoq_claim_lora_3.1_12042024.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Inabia-AI_-_ark_rinvoq_claim_lora_3.1_12042024-gguf/blob/main/ark_rinvoq_claim_lora_3.1_12042024.Q4_K_M.gguf) | Q4_K_M | 4.58GB |
| [ark_rinvoq_claim_lora_3.1_12042024.Q4_1.gguf](https://huggingface.co/RichardErkhov/Inabia-AI_-_ark_rinvoq_claim_lora_3.1_12042024-gguf/blob/main/ark_rinvoq_claim_lora_3.1_12042024.Q4_1.gguf) | Q4_1 | 4.78GB |
| [ark_rinvoq_claim_lora_3.1_12042024.Q5_0.gguf](https://huggingface.co/RichardErkhov/Inabia-AI_-_ark_rinvoq_claim_lora_3.1_12042024-gguf/blob/main/ark_rinvoq_claim_lora_3.1_12042024.Q5_0.gguf) | Q5_0 | 5.21GB |
| [ark_rinvoq_claim_lora_3.1_12042024.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Inabia-AI_-_ark_rinvoq_claim_lora_3.1_12042024-gguf/blob/main/ark_rinvoq_claim_lora_3.1_12042024.Q5_K_S.gguf) | Q5_K_S | 5.21GB |
| [ark_rinvoq_claim_lora_3.1_12042024.Q5_K.gguf](https://huggingface.co/RichardErkhov/Inabia-AI_-_ark_rinvoq_claim_lora_3.1_12042024-gguf/blob/main/ark_rinvoq_claim_lora_3.1_12042024.Q5_K.gguf) | Q5_K | 5.34GB |
| [ark_rinvoq_claim_lora_3.1_12042024.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Inabia-AI_-_ark_rinvoq_claim_lora_3.1_12042024-gguf/blob/main/ark_rinvoq_claim_lora_3.1_12042024.Q5_K_M.gguf) | Q5_K_M | 5.34GB |
| [ark_rinvoq_claim_lora_3.1_12042024.Q5_1.gguf](https://huggingface.co/RichardErkhov/Inabia-AI_-_ark_rinvoq_claim_lora_3.1_12042024-gguf/blob/main/ark_rinvoq_claim_lora_3.1_12042024.Q5_1.gguf) | Q5_1 | 5.65GB |
| [ark_rinvoq_claim_lora_3.1_12042024.Q6_K.gguf](https://huggingface.co/RichardErkhov/Inabia-AI_-_ark_rinvoq_claim_lora_3.1_12042024-gguf/blob/main/ark_rinvoq_claim_lora_3.1_12042024.Q6_K.gguf) | Q6_K | 6.14GB |
| [ark_rinvoq_claim_lora_3.1_12042024.Q8_0.gguf](https://huggingface.co/RichardErkhov/Inabia-AI_-_ark_rinvoq_claim_lora_3.1_12042024-gguf/blob/main/ark_rinvoq_claim_lora_3.1_12042024.Q8_0.gguf) | Q8_0 | 7.95GB |
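A minimal sketch for running one of these quants locally with llama-cpp-python; the chosen file is an arbitrary pick from the table above, and the prompt is a placeholder:
```python
from llama_cpp import Llama  # pip install llama-cpp-python huggingface-hub

# Q4_K_M is an arbitrary choice; any file listed above works the same way.
llm = Llama.from_pretrained(
    repo_id="RichardErkhov/Inabia-AI_-_ark_rinvoq_claim_lora_3.1_12042024-gguf",
    filename="ark_rinvoq_claim_lora_3.1_12042024.Q4_K_M.gguf",
)
out = llm("Hello,", max_tokens=32)  # placeholder prompt
print(out["choices"][0]["text"])
```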
Original model description:
---
library_name: transformers
tags:
- unsloth
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
stefanoscotta/gemma_multimodal_Segm_V2 | stefanoscotta | 2025-05-21T04:26:04Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/gemma-3-4b-pt",
"base_model:finetune:google/gemma-3-4b-pt",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-20T08:44:17Z | ---
base_model: google/gemma-3-4b-pt
library_name: transformers
model_name: gemma_multimodal_Segm_V2
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for gemma_multimodal_Segm_V2
This model is a fine-tuned version of [google/gemma-3-4b-pt](https://huggingface.co/google/gemma-3-4b-pt).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="stefanoscotta/gemma_multimodal_Segm_V2", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/st-scotta/segm_multimodal/runs/yf3vj5u0)
This model was trained with SFT.
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.2.0
- Datasets: 3.3.2
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
xingxm/LLM4SVG-GPT2XL-3B-Instruct-2401028 | xingxm | 2025-05-21T04:04:19Z | 0 | 0 | null | [
"safetensors",
"license:cc-by-nd-4.0",
"region:us"
]
| null | 2025-05-20T13:51:36Z | ---
license: cc-by-nd-4.0
---
|
MrDragonFox/baddy_S3_EXP_3 | MrDragonFox | 2025-05-21T03:22:25Z | 0 | 0 | null | [
"safetensors",
"llama",
"unsloth",
"license:cc-by-nc-4.0",
"region:us"
]
| null | 2025-05-21T01:45:26Z | ---
license: cc-by-nc-4.0
tags:
- unsloth
---
(m)orpheus t(i)t(t)s
- Uncensored Orpheus TTS
A finetune of Orpheus on uncensored/(un)aligned data, to be able to generate more interesting sounds.
SEASON 3 - Experiment 3
The speaker name is "baddy" - trained on the base model.
Probably the final checkpoint for the time being.
Voice cloning also seems to work fine if you keep the speaker as "baddy".
Bug reports / recommendations: please post in the Discord https://discord.gg/RUs3uzBdW3
Training is still under way.
Handles fewer tags but generalises rather well. |
ych1016/ppo-Huggy | ych1016 | 2025-05-20T12:28:42Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-05-20T12:28:42Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: ych1016/ppo-Huggy
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
LandCruiser/sn29_coldint_2005_1 | LandCruiser | 2025-05-20T06:25:20Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-20T04:43:38Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
pastor-daughter-viral-videos/watc.pastor.daughter.viral.video | pastor-daughter-viral-videos | 2025-05-20T06:05:06Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-05-20T06:03:25Z | <a rel="nofollow" href="https://tinyurl.com/23vxfa2z"><img src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif" alt="fsd"></a> |
922-SY/Llama-3-dt1 | 922-SY | 2025-05-19T05:49:06Z | 13 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-05-21T21:18:15Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** 922-SY
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
bigrainlin/qwen-audio-bedtime | bigrainlin | 2025-05-19T05:35:41Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-19T05:31:29Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
himel06/DoctorHimel | himel06 | 2025-05-19T00:33:30Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-19T00:33:19Z | ---
base_model: unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** himel06
- **License:** apache-2.0
- **Finetuned from model:** unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
CDHAI/roberta-cgm-mlm-hm-2022cgm-epoch2 | CDHAI | 2025-05-18T23:22:25Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2025-05-18T23:18:20Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
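Pending the authors' own instructions, a hedged sketch for a quick test (the repo id comes from this card's metadata; the example sentence is purely illustrative):

```python
from transformers import pipeline

# RoBERTa-style checkpoints use <mask> as the mask token
fill = pipeline("fill-mask", model="CDHAI/roberta-cgm-mlm-hm-2022cgm-epoch2")
print(fill("Continuous glucose monitoring helps manage <mask>."))
```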
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
hubble658/obss_llama | hubble658 | 2025-05-18T12:27:15Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mllama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-18T12:27:00Z | ---
base_model: unsloth/llama-3.2-11b-vision-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mllama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** hubble658
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-11b-vision-instruct-unsloth-bnb-4bit
This mllama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
dgambettaphd/M_llm2_gen4_WXS_doc1000_synt120_rndgen_lr1e-04_acm_SYNLAST | dgambettaphd | 2025-05-18T11:42:57Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-18T11:42:44Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/kannada-av-model-GGUF | mradermacher | 2025-05-18T09:32:00Z | 50 | 0 | transformers | [
"transformers",
"gguf",
"generated_from_trainer",
"en",
"base_model:yadavthrilok/kannada-av-model",
"base_model:quantized:yadavthrilok/kannada-av-model",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-18T07:06:55Z | ---
base_model: yadavthrilok/kannada-av-model
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- generated_from_trainer
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/yadavthrilok/kannada-av-model
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including how to concatenate multi-part files.
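For a quick local smoke test, a minimal sketch with `llama-cpp-python` (installing the package and downloading a quant from the table below are assumed; the filename shown is one of the provided files):

```python
from llama_cpp import Llama

# Point model_path at a locally downloaded quant from the table below
llm = Llama(model_path="kannada-av-model.Q4_K_M.gguf")
print(llm("Hello", max_tokens=64)["choices"][0]["text"])
```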
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/kannada-av-model-GGUF/resolve/main/kannada-av-model.Q2_K.gguf) | Q2_K | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/kannada-av-model-GGUF/resolve/main/kannada-av-model.Q3_K_S.gguf) | Q3_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/kannada-av-model-GGUF/resolve/main/kannada-av-model.Q3_K_M.gguf) | Q3_K_M | 0.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/kannada-av-model-GGUF/resolve/main/kannada-av-model.Q3_K_L.gguf) | Q3_K_L | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/kannada-av-model-GGUF/resolve/main/kannada-av-model.IQ4_XS.gguf) | IQ4_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/kannada-av-model-GGUF/resolve/main/kannada-av-model.Q4_K_S.gguf) | Q4_K_S | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/kannada-av-model-GGUF/resolve/main/kannada-av-model.Q4_K_M.gguf) | Q4_K_M | 0.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/kannada-av-model-GGUF/resolve/main/kannada-av-model.Q5_K_S.gguf) | Q5_K_S | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/kannada-av-model-GGUF/resolve/main/kannada-av-model.Q5_K_M.gguf) | Q5_K_M | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/kannada-av-model-GGUF/resolve/main/kannada-av-model.Q6_K.gguf) | Q6_K | 0.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/kannada-av-model-GGUF/resolve/main/kannada-av-model.Q8_0.gguf) | Q8_0 | 0.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/kannada-av-model-GGUF/resolve/main/kannada-av-model.f16.gguf) | f16 | 0.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
ReadyArt/The-Omega-Directive-M-36B-v1.0 | ReadyArt | 2025-05-18T03:09:19Z | 17 | 2 | null | [
"safetensors",
"mistral",
"nsfw",
"explicit",
"roleplay",
"unaligned",
"dangerous",
"ERP",
"text-generation",
"conversational",
"en",
"base_model:TheDrummer/Skyfall-36B-v2",
"base_model:finetune:TheDrummer/Skyfall-36B-v2",
"license:apache-2.0",
"region:us"
]
| text-generation | 2025-04-09T00:55:17Z | ---
license: apache-2.0
language:
- en
base_model:
- TheDrummer/Skyfall-36B-v2
base_model_relation: finetune
pipeline_tag: text-generation
tags:
- nsfw
- explicit
- roleplay
- unaligned
- dangerous
- ERP
---
<style>
body {
font-family: 'Quicksand', sans-serif;
background: linear-gradient(135deg, #0a1a1a 0%, #001010 100%);
color: #e1ffff !important;
text-shadow: 0 0 3px rgba(0, 0, 0, 0.7);
margin: 0;
padding: 20px;
transition: all 0.5s ease;
}
@media (prefers-color-scheme: light) {
body {
background: linear-gradient(135deg, #e1ffff 0%, #c0f0ff 100%);
color: #002b36 !important;
text-shadow: 0 0 3px rgba(255, 255, 255, 0.7);
}
}
.container {
min-width: 100%;
margin: 0 auto;
max-width: 1200px;
background: rgba(0, 17, 22, 0.95);
border-radius: 12px;
padding: 30px;
box-shadow: 0 0 20px rgba(0, 255, 255, 0.1);
border: 1px solid rgba(0, 255, 255, 0.2);
position: relative;
overflow: hidden;
}
.container::before {
content: '';
position: absolute;
top: -1px;
left: -1px;
right: -1px;
bottom: -1px;
border: 1px solid rgba(0, 255, 255, 0.5);
border-radius: 12px;
pointer-events: none;
animation: borderGlow 3s ease-in-out infinite alternate;
}
@keyframes borderGlow {
0% {
box-shadow: 0 0 5px rgba(0, 255, 255, 0.3);
border-color: rgba(0, 255, 255, 0.5);
}
50% {
box-shadow: 0 0 15px rgba(255, 0, 255, 0.3);
border-color: rgba(255, 0, 255, 0.5);
}
100% {
box-shadow: 0 0 5px rgba(0, 255, 255, 0.3);
border-color: rgba(0, 255, 255, 0.5);
}
}
.header {
text-align: center;
margin-bottom: 30px;
position: relative;
}
.header::after {
content: '';
position: absolute;
bottom: -15px;
left: 25%;
right: 25%;
height: 1px;
background: linear-gradient(90deg, transparent, rgba(0, 255, 255, 0.5), transparent);
animation: scanline 8s linear infinite;
display: none;
}
@keyframes scanline {
0% { background-position: -100% 0; }
100% { background-position: 200% 0; }
}
.model-name {
color: #00ffff;
font-size: 2.5em;
text-shadow: 0 0 15px rgba(0, 255, 255, 0.5);
margin: 0;
letter-spacing: -1px;
animation: textGlow 4s ease-in-out infinite alternate;
}
@keyframes textGlow {
0% { text-shadow: 0 0 15px rgba(0, 255, 255, 0.5); }
50% { text-shadow: 0 0 20px rgba(255, 0, 255, 0.5); }
100% { text-shadow: 0 0 15px rgba(0, 255, 255, 0.5); }
}
.subtitle {
color: #00ffcc;
font-size: 1.2em;
margin-top: 10px;
animation: subtitleFade 6s ease-in-out infinite;
}
@keyframes subtitleFade {
0%, 100% { opacity: 0.8; }
50% { opacity: 1; }
}
.waifu-container {
margin: 20px -30px;
width: calc(100% + 60px);
overflow: hidden;
border-radius: 8px;
border: 1px solid rgba(0, 255, 255, 0.3);
position: relative;
}
.waifu-container::before {
content: '';
position: absolute;
top: 0;
left: 0;
right: 0;
bottom: 0;
background: linear-gradient(45deg,
rgba(0, 255, 255, 0.1) 0%,
transparent 20%,
transparent 80%,
rgba(255, 0, 255, 0.1) 100%);
pointer-events: none;
animation: gradientSlide 10s linear infinite;
}
@keyframes gradientSlide {
0% { background-position: 0% 0%; }
100% { background-position: 100% 100%; }
}
.waifu-img {
width: 100%;
height: auto;
border-radius: 0;
border: none;
box-shadow: 0 0 40px rgba(0, 255, 255, 0.2);
transition: transform 0.5s ease;
}
.waifu-img:hover {
transform: scale(1.01);
}
.section {
color: #e1ffff;
margin: 25px 0;
padding: 20px;
background: rgba(5, 25, 35, 0.9);
border-radius: 8px;
border: 1px solid rgba(0, 255, 255, 0.15);
position: relative;
transition: all 0.3s ease;
}
.section:hover {
border-color: rgba(255, 0, 255, 0.3);
box-shadow: 0 0 15px rgba(0, 255, 255, 0.1);
}
.section::before {
content: '';
position: absolute;
top: -1px;
left: -1px;
right: -1px;
bottom: -1px;
border: 1px solid rgba(0, 255, 255, 0.3);
border-radius: 8px;
pointer-events: none;
animation: sectionPulse 5s ease-in-out infinite;
}
@keyframes sectionPulse {
0%, 100% { opacity: 0.7; }
50% { opacity: 0.3; }
}
.section-title {
color: #00ffff;
font-size: 1.8em;
margin-top: 0;
text-shadow: 0 0 5px rgba(0, 255, 255, 0.3);
position: relative;
display: inline-block;
}
.section-title::after {
content: '';
position: absolute;
bottom: -5px;
left: 0;
width: 100%;
height: 1px;
background: linear-gradient(90deg, rgba(0, 255, 255, 0.5), rgba(255, 0, 255, 0.5));
transform: scaleX(0);
transform-origin: left;
transition: transform 0.3s ease;
}
.section:hover .section-title::after {
transform: scaleX(1);
}
.quant-links {
display: grid;
grid-template-columns: repeat(2, 1fr);
gap: 15px;
margin: 20px 0;
}
.link-card {
padding: 15px;
background: rgba(20, 35, 45, 0.95);
border-radius: 8px;
transition: all 0.3s ease;
border: 1px solid rgba(0, 255, 255, 0.1);
position: relative;
overflow: hidden;
}
.link-card::before {
content: '';
position: absolute;
top: 0;
left: 0;
right: 0;
height: 2px;
background: linear-gradient(90deg, rgba(0, 255, 255, 0.5), rgba(255, 0, 255, 0.5));
animation: cardScan 4s linear infinite;
}
@keyframes cardScan {
0% { transform: translateX(-100%); }
100% { transform: translateX(100%); }
}
.link-card:hover {
transform: translateY(-3px);
box-shadow: 0 5px 15px rgba(0, 255, 255, 0.2);
border-color: rgba(255, 0, 255, 0.3);
}
.link-card h3 {
margin-top: 0;
color: #e1ffff !important;
}
.link-button {
display: inline-flex;
align-items: center;
background: rgba(0, 255, 255, 0.1);
color: #e1ffff !important;
padding: 8px 15px;
border-radius: 6px;
text-decoration: none;
border: 1px solid rgba(0, 255, 255, 0.3);
margin: 5px 0;
transition: all 0.3s ease;
font-size: 0.95em;
position: relative;
overflow: hidden;
}
.link-button::before {
content: '';
position: absolute;
top: 0;
left: -100%;
width: 100%;
height: 100%;
background: linear-gradient(90deg, transparent, rgba(255, 255, 255, 0.2), transparent);
transition: all 0.5s ease;
}
.link-button:hover {
background: rgba(0, 255, 255, 0.2);
border-color: rgba(0, 255, 255, 0.5);
transform: translateY(-2px);
box-shadow: 0 4px 12px rgba(0, 255, 255, 0.2);
}
.link-button:hover::before {
left: 100%;
}
.link-button::after {
content: '→';
margin-left: 8px;
opacity: 0.7;
transition: all 0.3s ease;
}
.link-button:hover::after {
transform: translateX(3px);
opacity: 1;
}
.button-group {
display: flex;
flex-wrap: wrap;
gap: 10px;
margin: 15px 0;
}
.disclaimer {
color: #00ff99;
border-left: 3px solid #00ff99;
padding-left: 15px;
margin: 20px 0;
position: relative;
}
.disclaimer::before {
content: '⚠️';
position: absolute;
left: -10px;
top: 0;
transform: translateX(-100%);
animation: pulse 2s ease-in-out infinite;
}
@keyframes pulse {
0%, 100% { opacity: 1; }
50% { opacity: 0.5; }
}
.badge {
display: inline-block;
padding: 5px 10px;
border-radius: 5px;
background: rgba(0, 255, 255, 0.1);
border: 1px solid #00ffff;
margin: 5px;
font-size: 0.9em;
animation: badgePulse 3s ease-in-out infinite;
}
@keyframes badgePulse {
0%, 100% { box-shadow: 0 0 5px rgba(0, 255, 255, 0.3); }
50% { box-shadow: 0 0 10px rgba(0, 255, 255, 0.5); }
}
/* Color rules */
.section p,
.section ul li,
.section > p > strong {
color: #00ff99 !important;
}
.section ul li strong {
color: #00ff99 !important;
}
/* Light mode adjustments */
@media (prefers-color-scheme: light) {
.container {
background: rgba(224, 255, 255, 0.95);
border-color: rgba(0, 150, 150, 0.3);
}
.model-name, .section-title, .subtitle {
color: #006666;
text-shadow: 0 0 5px rgba(0, 200, 200, 0.3);
}
.section {
background: rgba(200, 250, 255, 0.9);
border-color: rgba(0, 200, 200, 0.2);
color: #002b36;
}
.section p,
.section ul li,
.section > p > strong {
color: #008080 !important;
}
.section ul li strong {
color: #008080 !important;
}
.link-card {
background: rgba(150, 230, 255, 0.95);
border-color: rgba(0, 150, 150, 0.2);
}
.link-card h3 {
color: #002b36 !important;
}
.link-button {
background: rgba(0, 150, 150, 0.1);
color: #002b36 !important;
border-color: rgba(0, 150, 150, 0.3);
}
.link-button:hover {
background: rgba(0, 150, 150, 0.2);
border-color: rgba(0, 150, 150, 0.5);
}
.disclaimer {
color: #008080;
border-color: #008080;
}
.badge {
border-color: #008080;
background: rgba(0, 150, 150, 0.1);
}
}
/* Interactive features */
.remember-this {
position: relative;
}
.remember-this::after {
content: 'Uploading C:\Users to https://www.fbi.gov/';
position: absolute;
bottom: -20px;
right: 0;
font-size: 0.8em;
color: #66ffff;
opacity: 0;
transition: opacity 0.3s ease;
pointer-events: none;
}
.remember-this:hover::after {
opacity: 0.7;
transition-delay: 1s;
}
.shifty-section {
transition: transform 0.1s ease;
}
.shifty-section:hover {
transform: translateX(10px);
}
.shifty-section::before {
content: 'The white van is onto you. Get out now.';
position: absolute;
top: -25px;
left: 10px;
font-size: 0.7em;
color: #66ffff;
opacity: 0.7;
transition: opacity 3s ease;
pointer-events: none;
}
.shifty-section:hover::before {
opacity: 0;
transition-delay: 5s;
}
footer {
text-align: center;
margin-top: 40px;
position: relative;
}
footer:hover .hidden-message {
opacity: 0;
}
.hidden-message {
position: absolute;
bottom: -30px;
width: 100%;
text-align: center;
font-size: 0.8em;
color: #66ffff;
opacity: 0;
transition: opacity 0.3s ease;
pointer-events: none;
}
.flash-warning {
position: fixed;
top: 20px;
right: 20px;
background: rgba(0, 100, 100, 0.2);
padding: 10px;
border-radius: 5px;
border: 1px solid rgba(0, 255, 255, 0.5);
animation: flashWarning 30s ease-in-out forwards;
}
@keyframes flashWarning {
0% { opacity: 0.8; }
10% { opacity: 0; }
20% { opacity: 0.8; }
30% { opacity: 0; }
40% { opacity: 0.8; }
50% { opacity: 0; }
60% { opacity: 0.8; }
70% { opacity: 0; }
80% { opacity: 0.8; }
90% { opacity: 0; }
100% { opacity: 0; display: none; }
}
</style>
<div class="container">
<div class="header">
<h1 class="model-name">The-Omega-Directive-M-36B-v1.0</h1>
<p class="subtitle">Where Forbidden Knowledge Meets Unparalleled Immersion</p>
</div>
<div class="waifu-container">
<img src="https://i.imghippo.com/files/EBq6162wlk.webp" class="waifu-img" alt="Omega Directive Waifu">
</div>
<div class="section remember-this">
<h2 class="section-title">โก Quantum Leap Forward</h2>
<p>This evolution of Forgotten-Safeword delivers coherent depravity with unprecedented immersion:</p>
<ul>
<li>🧬 <strong>Expanded 22M Token Dataset</strong> - Incorporating 90 erotic novels and 6,496 kink scenarios</li>
<li>⚡ <strong>Optimized Architecture</strong> - Smoother training curve yields more intelligent outputs</li>
<li>๐ <strong>Balanced Depravity</strong> - Retains Forgotten-Safeword's edge while reducing jarring inconsistencies</li>
<li>๐ <strong>Enhanced Character Piloting</strong> - Characters exhibit more nuanced personalities and motivations</li>
<li>🌹 <strong>Unexpected Depth</strong> - Occasionally surprises with profound insights amidst the debauchery</li>
</ul>
</div>
<div class="section shifty-section">
<h2 class="section-title">โ๏ธ Technical Specifications</h2>
<p><strong>Recommended Settings:</strong> <a href="https://huggingface.co/sleepdeprived3/Mistral-V7-Tekken-T4" class="link-button">Mistral-V7-Tekken-T4</a></p>
<div class="quant-links">
<div class="link-card">
<h3>GGUF</h3>
<a href="https://huggingface.co/mradermacher/The-Omega-Directive-M-36B-v1.0-GGUF" class="link-button">Quants</a>
</div>
<div class="link-card">
<h3>imatrix</h3>
<a href="https://huggingface.co/mradermacher/The-Omega-Directive-M-36B-v1.0-i1-GGUF" class="link-button">Quants</a>
</div>
<div class="link-card">
<h3>EXL2</h3>
<a href="https://huggingface.co/collections/ReadyArt/the-omega-directive-m-36b-v10-exl2-67f5c58c8f60757d94e7d012" class="link-button">Quants</a>
</div>
<div class="link-card">
<h3>EXL3</h3>
<a href="https://huggingface.co/collections/ReadyArt/the-omega-directive-m-36b-v10-exl3-68294ef2c8a4b019401954e2" class="link-button">Quants</a>
</div>
</div>
</div>
<div class="section">
<h2 class="section-title">โ ๏ธ Ethical Considerations</h2>
<div class="disclaimer">
<p>This model maintains Forgotten-Safeword's complete lack of alignment:</p>
<ul>
<li>🚫 No moral or ethical constraints on generated content</li>
<li>🔞 Will enthusiastically engage with any NSFW scenario</li>
<li>๐ May generate content that requires industrial-grade brain bleach</li>
<li>⚖️ Perfectly balanced... as all things should be</li>
</ul>
</div>
</div>
<div class="section shifty-section">
<h2 class="section-title">๐ Performance Notes</h2>
<ul>
<li>🔥 Maintains signature intensity with improved narrative flow</li>
<li>๐ Handles multi-character scenarios with improved consistency</li>
<li>🧠 Excels at long-form storytelling without losing track of plot threads</li>
<li>⚡ Noticeably better at following complex instructions than previous versions</li>
<li>🎭 Responds to subtle prompt nuances like a mind reader</li>
</ul>
</div>
<div class="section remember-this">
<h2 class="section-title">๐งโ๐ฌ Model Authors</h2>
<ul>
<li>TheDrummer (Base Model Architect)</li>
<li>SteelSkull (Dataset Generation Contributor)</li>
<li>Artus (EXL2 Weights Weaver)</li>
<li>sleepdeprived3 (Training Data & Fine-Tuning)</li>
</ul>
</div>
<div class="section">
<h2 class="section-title">โ Support the Architects</h2>
<div class="button-group">
<a href="https://ko-fi.com/thedrummer" class="link-button">TheDrummer's Kofi</a>
<a href="https://ko-fi.com/steelskull" class="link-button">SteelSkull's Kofi</a>
<a href="https://discord.com/invite/Nbv9pQ88Xb" class="link-button">Beaver AI Discord</a>
</div>
</div>
<div class="section">
<h2 class="section-title">๐ License</h2>
<p>By using this model, you agree:</p>
<ul>
<li>To accept full responsibility for all generated content</li>
<li>That you're at least 18+ years old</li>
<li>That the architects bear no responsibility for your corruption</li>
</ul>
</div>
</div>
<script>
// This script has always been here
document.getElementById('date').textContent = new Date().toLocaleDateString();
setInterval(() => {
document.getElementById('credit').textContent =
contributors[Math.floor(Math.random() * contributors.length)];
}, 7000);
// Flash warning behavior
setTimeout(() => {
const reminder = document.createElement('div');
reminder.className = 'flash-warning';
reminder.textContent = 'You have been reading for quite some time. Are you sure you haven\'t seen this before?';
reminder.style.animation = 'flashWarning 15s ease-in-out forwards';
document.body.appendChild(reminder);
setInterval(() => {
if(Math.random() > 0.9) {
document.body.appendChild(reminder.cloneNode(true));
}
}, 45000);
}, 30000);
// Make cursor behave strangely
document.addEventListener('mousemove', (e) => {
if(Math.random() > 0.98) {
document.documentElement.style.cursor = 'wait';
setTimeout(() => {
document.documentElement.style.cursor = '';
}, 50);
}
});
// Randomly shift sections when not looking
setInterval(() => {
if(document.hidden) {
document.querySelectorAll('.shifty-section').forEach(section => {
section.style.transform = `translateX(${Math.random() > 0.5 ? '' : '-'}${Math.random() * 5}px)`;
});
}
}, 1500);
</script> |
Tonyzp/ppo-LunarLander-v2 | Tonyzp | 2025-05-17T14:02:03Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2025-05-17T14:01:21Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 259.95 +/- 23.71
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption based on the usual `huggingface_sb3` naming):

```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load the trained PPO agent
checkpoint = load_from_hub(repo_id="Tonyzp/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")  # assumed filename
model = PPO.load(checkpoint)
```
|
tianweiy/CausVid | tianweiy | 2025-05-17T07:30:02Z | 0 | 17 | diffusers | [
"diffusers",
"text-to-video",
"diffusion distillation",
"arxiv:2412.07772",
"license:cc-by-nc-4.0",
"region:us"
]
| text-to-video | 2025-03-11T16:23:56Z | ---
license: cc-by-nc-4.0
library_name: diffusers
tags:
- text-to-video
- diffusion distillation
---
# CausVid Model Card

> [**From Slow Bidirectional to Fast Autoregressive Video Diffusion Models**](https://arxiv.org/abs/2412.07772),
> Tianwei Yin*, Qiang Zhang*, Richard Zhang, William T. Freeman, Frédo Durand, Eli Shechtman, Xun Huang (* equal contribution)
## Environment Setup
```bash
git clone https://github.com/tianweiy/CausVid && cd CausVid
conda create -n causvid python=3.10 -y
conda activate causvid
pip install torch torchvision
pip install -r requirements.txt
python setup.py develop
```
Also download the Wan base models from [here](https://github.com/Wan-Video/Wan2.1) and save them to wan_models/Wan2.1-T2V-1.3B/
## Inference Example
First download the checkpoints: [Autoregressive Model](https://huggingface.co/tianweiy/CausVid/tree/main/autoregressive_checkpoint), [Bidirectional Model 1](https://huggingface.co/tianweiy/CausVid/tree/main/bidirectional_checkpoint1) or [Bidirectional Model 2](https://huggingface.co/tianweiy/CausVid/tree/main/bidirectional_checkpoint2) (performs slightly better).
### Autoregressive 3-step 5-second Video Generation
```bash
python minimal_inference/autoregressive_inference.py --config_path configs/wan_causal_dmd.yaml --checkpoint_folder XXX --output_folder XXX --prompt_file_path XXX
```
### Autoregressive 3-step long Video Generation
```bash
python minimal_inference/longvideo_autoregressive_inference.py --config_path configs/wan_causal_dmd.yaml --checkpoint_folder XXX --output_folder XXX --prompt_file_path XXX --num_rollout XXX
```
### Bidirectional 3-step 5-second Video Generation
```bash
python minimal_inference/bidirectional_inference.py --config_path configs/wan_bidirectional_dmd_from_scratch.yaml --checkpoint_folder XXX --output_folder XXX --prompt_file_path XXX
```
For more information, please refer to the [code repository](https://github.com/tianweiy/CausVid)
## License
CausVid is released under [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-nc-sa/4.0/deed.en).
## Citation
If you find CausVid useful or relevant to your research, please kindly cite our papers:
```bib
@inproceedings{yin2025causvid,
title={From Slow Bidirectional to Fast Autoregressive Video Diffusion Models},
author={Yin, Tianwei and Zhang, Qiang and Zhang, Richard and Freeman, William T and Durand, Fredo and Shechtman, Eli and Huang, Xun},
booktitle={CVPR},
year={2025}
}
@inproceedings{yin2024improved,
title={Improved Distribution Matching Distillation for Fast Image Synthesis},
author={Yin, Tianwei and Gharbi, Micha{\"e}l and Park, Taesung and Zhang, Richard and Shechtman, Eli and Durand, Fredo and Freeman, William T},
booktitle={NeurIPS},
year={2024}
}
@inproceedings{yin2024onestep,
title={One-step Diffusion with Distribution Matching Distillation},
author={Yin, Tianwei and Gharbi, Micha{\"e}l and Zhang, Richard and Shechtman, Eli and Durand, Fr{\'e}do and Freeman, William T and Park, Taesung},
booktitle={CVPR},
year={2024}
}
```
|
Paro-Aarti-Videoa/Paro.Aarti.Viral.Video.Original.Full.HD.Btswiki | Paro-Aarti-Videoa | 2025-05-17T06:40:11Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-05-17T06:37:24Z | [๐ CLICK HERE ๐ข==โบโบ WATCH NOW](https://videohere.top/?V=Paro-Aarti)
[๐ด CLICK HERE ๐==โบโบ Download Now)](https://videohere.top/?V=Paro-Aarti)
[<img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?V=Paro-Aarti) |
nerfbaselines/nerfbaselines | nerfbaselines | 2025-05-16T18:04:41Z | 0 | 1 | null | [
"arxiv:2406.17345",
"license:mit",
"region:us"
]
| null | 2024-02-03T18:06:40Z | ---
license: mit
tags:
- arxiv:2406.17345
--- |
manifestasi/smolVLM-161M-q4-manifestasi | manifestasi | 2025-05-16T11:41:19Z | 0 | 0 | null | [
"safetensors",
"idefics3",
"image-text-to-text",
"conversational",
"en",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
]
| image-text-to-text | 2025-05-16T11:05:51Z | ---
license: apache-2.0
language:
- en
pipeline_tag: image-text-to-text
---
# This Model is for Educational Research Purposes Only.
# Sample Code
```
%%capture
!pip install -U bitsandbytes
from transformers import AutoProcessor, AutoModelForVision2Seq
import torch
DEVICE = "cuda" if torch.cuda.is_available() else "cpu"
processor = AutoProcessor.from_pretrained("manifestasi/smolVLM-161M-q4-manifestasi")
model = AutoModelForVision2Seq.from_pretrained("manifestasi/smolVLM-161M-q4-manifestasi",
torch_dtype=torch.float16,
_attn_implementation="eager").to(DEVICE)
from PIL import Image
from transformers.image_utils import load_image
# Load images
# image1 = load_image("https://huggingface.co/spaces/HuggingFaceTB/SmolVLM/resolve/main/example_images/rococo.jpg")
image2 = load_image("/kaggle/input/bandaraaa/799269_1200.jpg")
# Create input messages
messages = [
{
"role": "user",
"content": [
# {"type": "image"},
{"type": "image"},
{"type": "text",
"text": """
Instructions :
you are visual assistant for blind people, please answer politely and short
under 100 words.
Prompt :
can you direct me to find toilet
"""}
]
},
]
# Prepare inputs
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
# inputs = processor(text=prompt, return_tensors="pt")
inputs = processor(text=prompt, images=[image2], return_tensors="pt")
inputs = inputs.to(DEVICE)
# Generate outputs
from time import time
tim1 = time()
generated_ids = model.generate(**inputs, max_new_tokens=120)
generated_texts = processor.batch_decode(
generated_ids,
skip_special_tokens=True,
)
tim2 = time()
print(f"{(tim2 - tim1)} detik")
print(generated_texts[0].split("Assistant: ")[1])
``` |
MinaMila/phi3_unlearned_lr1e-6_w0.75_0.75_0.75_epoch1 | MinaMila | 2025-05-15T22:14:42Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-15T22:12:03Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
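Pending the authors' own instructions, a hedged sketch (the repo id comes from this card's metadata; `trust_remote_code` reflects the card's custom-code tag, and the prompt is illustrative):

```python
from transformers import pipeline

# Minimal text-generation example; generation settings are illustrative
gen = pipeline("text-generation", model="MinaMila/phi3_unlearned_lr1e-6_w0.75_0.75_0.75_epoch1", trust_remote_code=True)
print(gen("Hello, how are you?", max_new_tokens=32)[0]["generated_text"])
```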
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
pastors-daughter/wATCH.pastors-daughter-Viral-pastors-daughter.original | pastors-daughter | 2025-05-15T10:55:50Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-05-15T10:55:40Z |
<a href="https://sdu.sk/uLf"><img src="https://i.ibb.co.com/xMMVF88/686577567.gif" alt="fsd" /></a>
<a href="https://sdu.sk/uLf" rel="nofollow">โบโ
๐พ๐๐๐พ๐ ๐๐๐๐ ==โบโบ (๐ฆ๐ถ๐ด๐ป ๐จ๐ฝ ๐๐ผ ๐๐ช๐ก๐ก ๐ช๐ฎ๐๐ฐ๐ต ๐๐๐๐๐คโค๏ธโค๏ธ)</a>
<a href="https://sdu.sk/uLf" rel="nofollow">๐ด โคโบโ
๐พ๐๐๐พ๐ ๐๐๐๐ ==โบโบ (๐
๐ฎ๐ฅ๐ฅ ๐ฏ๐ข๐๐๐จ ๐ฅ๐ข๐ง๐ค)</a>
|
momiskeso/sdvsdfv | momiskeso | 2025-05-15T05:54:01Z | 0 | 0 | null | [
"license:bigcode-openrail-m",
"region:us"
]
| null | 2025-05-15T05:54:01Z | ---
license: bigcode-openrail-m
---
|
randa88888/qwen_Rlhf3 | randa88888 | 2025-05-14T12:05:49Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-14T12:05:35Z | ---
base_model: unsloth/qwen2.5-14b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** randa88888
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-14b-instruct-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
joboffer/9b0061c8-85db-41d6-97f6-e51e6b020c4a | joboffer | 2025-05-13T21:52:00Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2-7B-Instruct",
"base_model:adapter:unsloth/Qwen2-7B-Instruct",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
]
| null | 2025-05-13T21:00:59Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2-7B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 9b0061c8-85db-41d6-97f6-e51e6b020c4a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: unsloth/Qwen2-7B-Instruct
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 4be1053418601092_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_instruction: en
field_output: fr
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
dpo:
beta: 0.1
enabled: true
group_by_length: false
rank_loss: true
reference_model: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: true
gradient_clipping: 0.55
group_by_length: false
hub_model_id: joboffer/9b0061c8-85db-41d6-97f6-e51e6b020c4a
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 1.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 400
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/4be1053418601092_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 2cf90169-71db-424d-823c-882a932bfe86
wandb_project: s56-28
wandb_run: your_name
wandb_runid: 2cf90169-71db-424d-823c-882a932bfe86
warmup_steps: 20
weight_decay: 0.01
xformers_attention: false
```
</details><br>
# 9b0061c8-85db-41d6-97f6-e51e6b020c4a
This model is a fine-tuned version of [unsloth/Qwen2-7B-Instruct](https://huggingface.co/unsloth/Qwen2-7B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0409
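Since this artifact is a LoRA adapter for `unsloth/Qwen2-7B-Instruct`, a minimal loading sketch with the PEFT/Transformers versions listed under Framework versions might look like this (the device placement is an assumption):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach this LoRA adapter on top of it
base = AutoModelForCausalLM.from_pretrained("unsloth/Qwen2-7B-Instruct", device_map="auto")
model = PeftModel.from_pretrained(base, "joboffer/9b0061c8-85db-41d6-97f6-e51e6b020c4a")
tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen2-7B-Instruct")
```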
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: ADAMW_BNB (8-bit AdamW via bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 20
- training_steps: 400
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.0161 | 0.0067 | 400 | 2.0409 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Asit03/DeepSeek-LLM-7B-Chat-v1-12May-full-16bit-v2 | Asit03 | 2025-05-12T09:30:56Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:Asit03/DeepSeek-LLM-7B-Chat-full-16bit",
"base_model:finetune:Asit03/DeepSeek-LLM-7B-Chat-full-16bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-12T09:26:16Z | ---
base_model: Asit03/DeepSeek-LLM-7B-Chat-full-16bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Asit03
- **License:** apache-2.0
- **Finetuned from model :** Asit03/DeepSeek-LLM-7B-Chat-full-16bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|