modelId (string, 5 to 139 chars) | author (string, 2 to 42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-06-22 12:28:33) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 492 classes) | tags (sequence, 1 to 4.05k tags) | pipeline_tag (string, 54 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-06-22 12:28:03) | card (string, 11 to 1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
Sapna-Shah-Indians-viral-video/wATCH.Sapna.Shah.Nulook.India.viral.video | Sapna-Shah-Indians-viral-video | 2025-06-16T03:58:43Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-16T03:58:34Z | [](https://t.co/P8Ex9FtH0g) |
AGSII3042/HANA | AGSII3042 | 2025-06-16T03:57:56Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-06-16T03:17:00Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: HANA
---
# Hana
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using the AI Toolkit trainer: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `HANA` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
    "prompt": "HANA",
    "lora_weights": "https://huggingface.co/AGSII3042/HANA/resolve/main/lora.safetensors"
}
output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)
for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('AGSII3042/HANA', weight_name='lora.safetensors')
image = pipeline('HANA').images[0]
```
For more details, including weighting, merging, and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters).
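If you want the LoRA applied at reduced strength, diffusers can also fuse it into the base weights with a scale factor; a minimal sketch (the 0.8 value is an arbitrary example, not a recommendation from this repository):
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('AGSII3042/HANA', weight_name='lora.safetensors')
# Fuse the LoRA into the base weights at 80% strength (example value)
pipeline.fuse_lora(lora_scale=0.8)
image = pipeline('HANA').images[0]
```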
## Training details
- Steps: 3000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/AGSII3042/HANA/discussions) to add images that show off what you've made with this LoRA.
|
Indians-viral-video/wATCH.Sapna.Shah.Nulook.India.viral.video.original.Trending | Indians-viral-video | 2025-06-16T03:57:15Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-16T03:56:37Z | [](https://t.co/P8Ex9FtH0g) |
MinaMila/phi3_unlearned_2nd_5e-7_1.0_0.25_0.5_0.25_epoch1 | MinaMila | 2025-06-16T03:57:08Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T03:55:20Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
MinaMila/gemma_2b_unlearned_2nd_5e-7_1.0_0.05_0.05_0.25_epoch2 | MinaMila | 2025-06-16T03:56:33Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T03:54:40Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
18-maryam-faisal-Leaked-Viral-video/TRENDING.maryam.faisal.Viral.Video.Link.Tutorial | 18-maryam-faisal-Leaked-Viral-video | 2025-06-16T03:54:29Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-16T03:54:17Z | 1 second ago
[CLICK HERE ==►► WATCH NOW](https://infobal.com.ar/watch-full-video/?Bados.all)
[CLICK HERE ==►► WATCH NOW FREE](https://infobal.com.ar/watch-full-video/?Bados.all)
<a href="https://infobal.com.ar/watch-full-video/?Bados.all" rel="nofollow" data-target="animated-image.originalLink"><img src="https://media.post.rvohealth.io/wp-content/uploads/2024/02/video-overlay-bed-sex-couple-original.jpg" alt="WATCH Videos" data-canonical-src="https://i.imgur.com/dJHk4Zq.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
sil-ai/madlad400-finetuned-uni-tpi | sil-ai | 2025-06-16T03:53:34Z | 110 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"translation",
"uni",
"tpi",
"base_model:jbochi/madlad400-3b-mt",
"base_model:adapter:jbochi/madlad400-3b-mt",
"license:apache-2.0",
"region:us"
] | translation | 2025-06-05T03:56:50Z | ---
base_model: jbochi/madlad400-3b-mt
library_name: peft
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: madlad400-finetuned-uni-tpi
results: []
language:
- uni
- tpi
model_type: Translation
pipeline_tag: translation
---
# madlad400-finetuned-uni-tpi
This model is a fine-tuned version of `jbochi/madlad400-3b-mt` for translation from Ramo to Tok Pisin.
## Model details
- **Developed by:** SIL Global
- **Finetuned from model:** jbochi/madlad400-3b-mt
- **Model type:** Translation
- **Source language:** Ramo (`uni`)
- **Target language:** Tok Pisin (`tpi`)
- **License:** closed/private
## Datasets
The model was trained on a parallel corpus of plain text files:
Ramo:
- Ramo Scriptures
- License: All rights reserved, Wycliffe Bible Translators. Used with permission.
Tok Pisin:
- Tok Pisin back-translation
- License: All rights reserved, Wycliffe Bible Translators. Used with permission.
## Usage
You can use this model with the `transformers` library like this:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("sil-ai/madlad400-finetuned-uni-tpi")
model = AutoModelForSeq2SeqLM.from_pretrained("sil-ai/madlad400-finetuned-uni-tpi")
inputs = tokenizer("Your input text here", return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0]))
```
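Note that this repository contains a PEFT (LoRA) adapter rather than full model weights, so the snippet above requires `peft` to be installed. MADLAD-400 checkpoints also conventionally expect the target-language token prepended to the source text. A minimal sketch of loading the adapter explicitly (the `<2tpi>` prefix and `max_new_tokens` value are assumptions carried over from the base model's usage, not documented behavior of this checkpoint):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
from peft import PeftModel

# Load the base model, then attach this adapter on top of it
base = AutoModelForSeq2SeqLM.from_pretrained("jbochi/madlad400-3b-mt")
model = PeftModel.from_pretrained(base, "sil-ai/madlad400-finetuned-uni-tpi")
tokenizer = AutoTokenizer.from_pretrained("jbochi/madlad400-3b-mt")

# MADLAD-400 prepends the *target* language token to the source sentence
inputs = tokenizer("<2tpi> Your Ramo text here", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```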
# madlad400-finetuned-uni-tpi
This model is a fine-tuned version of [jbochi/madlad400-3b-mt](https://huggingface.co/jbochi/madlad400-3b-mt) on the parallel corpus described above.
It achieves the following results on the evaluation set:
- Loss: 0.1777
- Chrf: 78.0127
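The Chrf figures reported in these cards are chrF scores on a 0 to 100 scale. A minimal sketch of computing the metric with `sacrebleu` (the sentences are placeholders):
```python
import sacrebleu

hyps = ["Mi laik kaikai."]        # model translations (placeholder)
refs = [["Mi laikim kaikai."]]    # one inner list per reference set
print(sacrebleu.corpus_chrf(hyps, refs).score)  # chrF, 0-100
```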
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Chrf |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.2819 | 7.7482 | 1600 | 0.2113 | 75.2590 |
### Framework versions
- PEFT 0.12.0
- Transformers 4.44.2
- Pytorch 2.4.1+cu124
- Datasets 2.21.0
- Tokenizers 0.19.1 |
sil-ai/madlad400-finetuned-bpe_x_bar-tpi | sil-ai | 2025-06-16T03:53:32Z | 14 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"translation",
"bpe",
"tpi",
"base_model:jbochi/madlad400-3b-mt",
"base_model:adapter:jbochi/madlad400-3b-mt",
"license:apache-2.0",
"region:us"
] | translation | 2025-06-06T19:20:16Z | ---
base_model: jbochi/madlad400-3b-mt
library_name: peft
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: madlad400-finetuned-bpe_x_bar-tpi
results: []
language:
- bpe
- tpi
model_type: Translation
pipeline_tag: translation
---
# madlad400-finetuned-bpe_x_bar-tpi
This model is a fine-tuned version of `jbochi/madlad400-3b-mt` for translation from Bauni, Barupu to Tok Pisin.
## Model details
- **Developed by:** SIL Global
- **Finetuned from model:** jbochi/madlad400-3b-mt
- **Model type:** Translation
- **Source language:** Bauni, Barupu (`bpe`)
- **Target language:** Tok Pisin (`tpi`)
- **License:** closed/private
## Datasets
The model was trained on a parallel corpus of plain text files:
Bauni, Barupu:
- Bauni, Barupu Scriptures
- License: All rights reserved, Wycliffe Bible Translators. Used with permission.
Tok Pisin:
- Tok Pisin back-translation
- License: All rights reserved, Wycliffe Bible Translators. Used with permission.
## Usage
You can use this model with the `transformers` library like this:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("sil-ai/madlad400-finetuned-bpe_x_bar-tpi")
model = AutoModelForSeq2SeqLM.from_pretrained("sil-ai/madlad400-finetuned-bpe_x_bar-tpi")
inputs = tokenizer("Your input text here", return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0]))
```
# madlad400-finetuned-bpe_x_bar-tpi
This model is a fine-tuned version of [jbochi/madlad400-3b-mt](https://huggingface.co/jbochi/madlad400-3b-mt) on the parallel corpus described above.
It achieves the following results on the evaluation set:
- Loss: 0.1551
- Chrf: 80.7709
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Chrf |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.2676 | 7.6923 | 1600 | 0.1899 | 78.1510 |
### Framework versions
- PEFT 0.12.0
- Transformers 4.44.2
- Pytorch 2.4.1+cu124
- Datasets 2.21.0
- Tokenizers 0.19.1 |
sil-ai/madlad400-finetuned-aps-tpi | sil-ai | 2025-06-16T03:53:30Z | 17 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"translation",
"aps",
"tpi",
"base_model:jbochi/madlad400-3b-mt",
"base_model:adapter:jbochi/madlad400-3b-mt",
"license:apache-2.0",
"region:us"
] | translation | 2025-06-03T13:20:01Z | ---
base_model: jbochi/madlad400-3b-mt
library_name: peft
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: madlad400-finetuned-aps-tpi
results: []
language:
- aps
- tpi
model_type: Translation
pipeline_tag: translation
---
# madlad400-finetuned-aps-tpi
This model is a fine-tuned version of `jbochi/madlad400-3b-mt` for translation from Arop to Tok Pisin.
## Model details
- **Developed by:** SIL Global
- **Finetuned from model:** jbochi/madlad400-3b-mt
- **Model type:** Translation
- **Source language:** Arop (`aps`)
- **Target language:** Tok Pisin (`tpi`)
- **License:** closed/private
## Datasets
The model was trained on a parallel corpus of plain text files:
Arop:
- Arop Scriptures
- License: All rights reserved, Wycliffe Bible Translators. Used with permission.
Tok Pisin:
- Tok Pisin back-translation
- License: All rights reserved, Wycliffe Bible Translators. Used with permission.
## Usage
You can use this model with the `transformers` library like this:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("sil-ai/madlad400-finetuned-aps-tpi")
model = AutoModelForSeq2SeqLM.from_pretrained("sil-ai/madlad400-finetuned-aps-tpi")
inputs = tokenizer("Your input text here", return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0]))
```
# madlad400-finetuned-aps-tpi
This model is a fine-tuned version of [jbochi/madlad400-3b-mt](https://huggingface.co/jbochi/madlad400-3b-mt) on the parallel corpus described above.
It achieves the following results on the evaluation set:
- Loss: 0.1082
- Chrf: 84.5824
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Chrf |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.328 | 3.8381 | 1600 | 0.2650 | 73.9647 |
| 0.1979 | 7.6762 | 3200 | 0.1369 | 82.2722 |
### Framework versions
- PEFT 0.12.0
- Transformers 4.44.2
- Pytorch 2.4.1+cu124
- Datasets 2.21.0
- Tokenizers 0.19.1 |
sil-ai/madlad400-finetuned-tpi-onr_x_goi | sil-ai | 2025-06-16T03:53:26Z | 7 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"translation",
"tpi",
"onr",
"base_model:jbochi/madlad400-3b-mt",
"base_model:adapter:jbochi/madlad400-3b-mt",
"license:apache-2.0",
"region:us"
] | translation | 2025-06-10T23:21:22Z | ---
base_model: jbochi/madlad400-3b-mt
library_name: peft
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: madlad400-finetuned-tpi-onr_x_goi
results: []
language:
- tpi
- onr
model_type: Translation
pipeline_tag: translation
---
# madlad400-finetuned-tpi-onr_x_goi
This model is a fine-tuned version of `jbochi/madlad400-3b-mt` for translation from Tok Pisin to Onnele, Goiniri.
## Model details
- **Developed by:** SIL Global
- **Finetuned from model:** jbochi/madlad400-3b-mt
- **Model type:** Translation
- **Source language:** Tok Pisin (`tpi`)
- **Target language:** Onnele, Goiniri (`onr`)
- **License:** closed/private
## Datasets
The model was trained on a parallel corpus of plain text files:
Tok Pisin:
- Tok Pisin back-translation
- License: All rights reserved, Wycliffe Bible Translators. Used with permission.
Onnele, Goiniri:
- Onnele, Goiniri Scriptures
- License: All rights reserved, Wycliffe Bible Translators. Used with permission.
## Usage
You can use this model with the `transformers` library like this:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("sil-ai/madlad400-finetuned-tpi-onr_x_goi")
model = AutoModelForSeq2SeqLM.from_pretrained("sil-ai/madlad400-finetuned-tpi-onr_x_goi")
inputs = tokenizer("Your input text here", return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0]))
```
# madlad400-finetuned-tpi-onr_x_goi
This model is a fine-tuned version of [jbochi/madlad400-3b-mt](https://huggingface.co/jbochi/madlad400-3b-mt) on the parallel corpus described above.
It achieves the following results on the evaluation set:
- Loss: 0.1267
- Chrf: 86.0110
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Chrf |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.2167 | 7.7295 | 1600 | 0.1581 | 83.8321 |
### Framework versions
- PEFT 0.12.0
- Transformers 4.44.2
- Pytorch 2.4.1+cu124
- Datasets 2.21.0
- Tokenizers 0.19.1 |
sil-ai/madlad400-finetuned-tpi-uni | sil-ai | 2025-06-16T03:53:25Z | 61 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"translation",
"tpi",
"uni",
"base_model:jbochi/madlad400-3b-mt",
"base_model:adapter:jbochi/madlad400-3b-mt",
"license:apache-2.0",
"region:us"
] | translation | 2025-06-05T03:56:30Z | ---
base_model: jbochi/madlad400-3b-mt
library_name: peft
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: madlad400-finetuned-tpi-uni
results: []
language:
- tpi
- uni
model_type: Translation
pipeline_tag: translation
---
# madlad400-finetuned-tpi-uni
This model is a fine-tuned version of `jbochi/madlad400-3b-mt` for translation from Tok Pisin to Ramo.
## Model details
- **Developed by:** SIL Global
- **Finetuned from model:** jbochi/madlad400-3b-mt
- **Model type:** Translation
- **Source language:** Tok Pisin (`tpi`)
- **Target language:** Ramo (`uni`)
- **License:** closed/private
## Datasets
The model was trained on a parallel corpus of plain text files:
Tok Pisin:
- Tok Pisin back-translation
- License: All rights reserved, Wycliffe Bible Translators. Used with permission.
Ramo:
- Ramo Scriptures
- License: All rights reserved, Wycliffe Bible Translators. Used with permission.
## Usage
You can use this model with the `transformers` library like this:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("sil-ai/madlad400-finetuned-tpi-uni")
model = AutoModelForSeq2SeqLM.from_pretrained("sil-ai/madlad400-finetuned-tpi-uni")
inputs = tokenizer("Your input text here", return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0]))
```
# madlad400-finetuned-tpi-uni
This model is a fine-tuned version of [jbochi/madlad400-3b-mt](https://huggingface.co/jbochi/madlad400-3b-mt) on the parallel corpus described above.
It achieves the following results on the evaluation set:
- Loss: 0.2914
- Chrf: 70.9472
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Chrf |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.4351 | 7.7482 | 1600 | 0.3454 | 67.9624 |
### Framework versions
- PEFT 0.12.0
- Transformers 4.44.2
- Pytorch 2.4.1+cu124
- Datasets 2.21.0
- Tokenizers 0.19.1 |
New-tutorial-Kayla-Nella-viral-video/FULL.VIDEO.Kayla.Nella.Viral.Video.Tutorial.Official | New-tutorial-Kayla-Nella-viral-video | 2025-06-16T03:53:11Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-16T03:52:55Z | <animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
sil-ai/madlad400-finetuned-tpi-bpe_x_pou | sil-ai | 2025-06-16T03:52:33Z | 136 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"translation",
"tpi",
"bpe",
"base_model:jbochi/madlad400-3b-mt",
"base_model:adapter:jbochi/madlad400-3b-mt",
"license:apache-2.0",
"region:us"
] | translation | 2025-06-06T21:34:34Z | ---
base_model: jbochi/madlad400-3b-mt
library_name: peft
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: madlad400-finetuned-tpi-bpe_x_pou
results: []
language:
- tpi
- bpe
model_type: Translation
pipeline_tag: translation
---
# madlad400-finetuned-tpi-bpe_x_pou
This model is a fine-tuned version of `jbochi/madlad400-3b-mt` for translation from Tok Pisin to Bauni, Pou.
## Model details
- **Developed by:** SIL Global
- **Finetuned from model:** jbochi/madlad400-3b-mt
- **Model type:** Translation
- **Source language:** Tok Pisin (`tpi`)
- **Target language:** Bauni, Pou (`bpe`)
- **License:** closed/private
## Datasets
The model was trained on a parallel corpus of plain text files:
Tok Pisin:
- Tok Pisin back-translation
- License: All rights reserved, Wycliffe Bible Translators. Used with permission.
Bauni, Pou:
- Bauni, Pou Scriptures
- License: All rights reserved, Wycliffe Bible Translators. Used with permission.
## Usage
You can use this model with the `transformers` library like this:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("sil-ai/madlad400-finetuned-tpi-bpe_x_pou")
model = AutoModelForSeq2SeqLM.from_pretrained("sil-ai/madlad400-finetuned-tpi-bpe_x_pou")
inputs = tokenizer("Your input text here", return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0]))
```
# madlad400-finetuned-tpi-bpe_x_pou
This model is a fine-tuned version of [jbochi/madlad400-3b-mt](https://huggingface.co/jbochi/madlad400-3b-mt) on the parallel corpus described above.
It achieves the following results on the evaluation set:
- Loss: 0.2035
- Chrf: 79.0847
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Chrf |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.3414 | 7.8144 | 1600 | 0.2413 | 76.9720 |
### Framework versions
- PEFT 0.12.0
- Transformers 4.44.2
- Pytorch 2.4.1+cu124
- Datasets 2.21.0
- Tokenizers 0.19.1 |
sil-ai/madlad400-finetuned-tpi-bpe_x_bar | sil-ai | 2025-06-16T03:52:32Z | 103 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"translation",
"tpi",
"bpe",
"base_model:jbochi/madlad400-3b-mt",
"base_model:adapter:jbochi/madlad400-3b-mt",
"license:apache-2.0",
"region:us"
] | translation | 2025-06-06T19:16:18Z | ---
base_model: jbochi/madlad400-3b-mt
library_name: peft
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: madlad400-finetuned-tpi-bpe_x_bar
results: []
language:
- tpi
- bpe
model_type: Translation
pipeline_tag: translation
---
# madlad400-finetuned-tpi-bpe_x_bar
This model is a fine-tuned version of `jbochi/madlad400-3b-mt` for translation from Tok Pisin to Bauni, Barupu.
## Model details
- **Developed by:** SIL Global
- **Finetuned from model:** jbochi/madlad400-3b-mt
- **Model type:** Translation
- **Source language:** Tok Pisin (`tpi`)
- **Target language:** Bauni, Barupu (`bpe`)
- **License:** closed/private
## Datasets
The model was trained on a parallel corpus of plain text files:
Tok Pisin:
- Tok Pisin back-translation
- License: All rights reserved, Wycliffe Bible Translators. Used with permission.
Bauni, Barupu:
- Bauni, Barupu Scriptures
- License: All rights reserved, Wycliffe Bible Translators. Used with permission.
## Usage
You can use this model with the `transformers` library like this:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("sil-ai/madlad400-finetuned-tpi-bpe_x_bar")
model = AutoModelForSeq2SeqLM.from_pretrained("sil-ai/madlad400-finetuned-tpi-bpe_x_bar")
inputs = tokenizer("Your input text here", return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0]))
```
# madlad400-finetuned-tpi-bpe_x_bar
This model is a fine-tuned version of [jbochi/madlad400-3b-mt](https://huggingface.co/jbochi/madlad400-3b-mt) on the parallel corpus described above.
It achieves the following results on the evaluation set:
- Loss: 0.2353
- Chrf: 76.7564
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Chrf |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.3624 | 7.6923 | 1600 | 0.2815 | 74.1335 |
### Framework versions
- PEFT 0.12.0
- Transformers 4.44.2
- Pytorch 2.4.1+cu124
- Datasets 2.21.0
- Tokenizers 0.19.1 |
sil-ai/madlad400-finetuned-tpi-sso | sil-ai | 2025-06-16T03:51:41Z | 17 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"translation",
"tpi",
"sso",
"base_model:jbochi/madlad400-3b-mt",
"base_model:adapter:jbochi/madlad400-3b-mt",
"license:apache-2.0",
"region:us"
] | translation | 2025-06-04T14:02:32Z | ---
base_model: jbochi/madlad400-3b-mt
library_name: peft
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: madlad400-finetuned-tpi-sso
results: []
language:
- tpi
- sso
model_type: Translation
pipeline_tag: translation
---
# madlad400-finetuned-tpi-sso
This model is a fine-tuned version of `jbochi/madlad400-3b-mt` for translation from Tok Pisin to Sissono.
## Model details
- **Developed by:** SIL Global
- **Finetuned from model:** jbochi/madlad400-3b-mt
- **Model type:** Translation
- **Source language:** Tok Pisin (`tpi`)
- **Target language:** Sissono (`sso`)
- **License:** closed/private
## Datasets
The model was trained on a parallel corpus of plain text files:
Tok Pisin:
- Tok Pisin back-translation
- License: All rights reserved, Wycliffe Bible Translators. Used with permission.
Sissono:
- Sissono Scriptures
- License: All rights reserved, Wycliffe Bible Translators. Used with permission.
## Usage
You can use this model with the `transformers` library like this:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("sil-ai/madlad400-finetuned-tpi-sso")
model = AutoModelForSeq2SeqLM.from_pretrained("sil-ai/madlad400-finetuned-tpi-sso")
inputs = tokenizer("Your input text here", return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0]))
```
# madlad400-finetuned-tpi-sso
This model is a fine-tuned version of [jbochi/madlad400-3b-mt](https://huggingface.co/jbochi/madlad400-3b-mt) on the parallel corpus described above.
It achieves the following results on the evaluation set:
- Loss: 0.2862
- Chrf: 71.3983
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Chrf |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.4531 | 7.7155 | 1600 | 0.3362 | 68.3550 |
### Framework versions
- PEFT 0.12.0
- Transformers 4.44.2
- Pytorch 2.4.1+cu124
- Datasets 2.21.0
- Tokenizers 0.19.1 |
Mezzo-Fun-Viral-Video/VIDEO.mezzo.fun.Viral.Video.Tutorial.Official | Mezzo-Fun-Viral-Video | 2025-06-16T03:50:52Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-16T03:48:42Z | <a href="https://t.co/98E3uGhPfJ" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="WATCH Videos" data-canonical-src="https://i.imgur.com/dJHk4Zq.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a> |
sil-ai/madlad400-finetuned-tpi-mbk | sil-ai | 2025-06-16T03:50:44Z | 27 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"translation",
"tpi",
"mbk",
"base_model:jbochi/madlad400-3b-mt",
"base_model:adapter:jbochi/madlad400-3b-mt",
"license:apache-2.0",
"region:us"
] | translation | 2025-06-03T16:23:33Z | ---
base_model: jbochi/madlad400-3b-mt
library_name: peft
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: madlad400-finetuned-tpi-mbk
results: []
language:
- tpi
- mbk
model_type: Translation
pipeline_tag: translation
---
# madlad400-finetuned-tpi-mbk
This model is a fine-tuned version of `jbochi/madlad400-3b-mt` for translation from Tok Pisin to Malol.
## Model details
- **Developed by:** SIL Global
- **Finetuned from model:** jbochi/madlad400-3b-mt
- **Model type:** Translation
- **Source language:** Tok Pisin (`tpi`)
- **Target language:** Malol (`mbk`)
- **License:** closed/private
## Datasets
The model was trained on a parallel corpus of plain text files:
Tok Pisin:
- Tok Pisin back-translation
- License: All rights reserved, Wycliffe Bible Translators. Used with permission.
Malol:
- Malol Scriptures
- License: All rights reserved, Wycliffe Bible Translators. Used with permission.
## Usage
You can use this model with the `transformers` library like this:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("sil-ai/madlad400-finetuned-tpi-mbk")
model = AutoModelForSeq2SeqLM.from_pretrained("sil-ai/madlad400-finetuned-tpi-mbk")
inputs = tokenizer("Your input text here", return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0]))
```
# madlad400-finetuned-tpi-mbk
This model is a fine-tuned version of [jbochi/madlad400-3b-mt](https://huggingface.co/jbochi/madlad400-3b-mt) on the parallel corpus described above.
It achieves the following results on the evaluation set:
- Loss: 0.2360
- Chrf: 75.8294
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Chrf |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.3713 | 7.7108 | 1600 | 0.2797 | 73.0831 |
### Framework versions
- PEFT 0.12.0
- Transformers 4.44.2
- Pytorch 2.4.1+cu124
- Datasets 2.21.0
- Tokenizers 0.19.1 |
huihui-ai/Huihui-MoE-1.2B-A0.6B | huihui-ai | 2025-06-16T03:49:28Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3_moe",
"text-generation",
"moe",
"conversational",
"base_model:Qwen/Qwen3-0.6B",
"base_model:finetune:Qwen/Qwen3-0.6B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T02:31:56Z | ---
license: apache-2.0
base_model:
- Qwen/Qwen3-0.6B
- suayptalha/Qwen3-0.6B-Code-Expert
- suayptalha/Qwen3-0.6B-Math-Expert
- suayptalha/Qwen3-0.6B-Medical-Expert
library_name: transformers
license_link: https://huggingface.co/Qwen/Qwen3-0.6B/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- moe
---
# huihui-ai/Huihui-MoE-1.2B-A0.6B
## Model Overview
Huihui-MoE-1.2B-A0.6B is a **Mixture of Experts (MoE)** language model developed by **huihui.ai**, built upon the **[Qwen/Qwen3-0.6B](https://huggingface.co/Qwen/Qwen3-0.6B)** base model. It enhances the standard Transformer architecture by replacing MLP layers with MoE layers, each containing 3 experts, to achieve high performance with efficient inference. The model is designed for natural language processing tasks, including text generation, question answering, and conversational applications.
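As an illustration of this routing scheme, here is a minimal top-1 MoE feed-forward sketch (a simplification with illustrative names and sizes, not the actual `Qwen3MoeForCausalLM` implementation):
```python
import torch
import torch.nn as nn

class Top1MoE(nn.Module):
    """Sketch: MoE feed-forward block with 3 experts, 1 activated per token."""
    def __init__(self, hidden_size=1024, intermediate_size=3072, num_experts=3):
        super().__init__()
        self.gate = nn.Linear(hidden_size, num_experts, bias=False)
        self.experts = nn.ModuleList([
            nn.Sequential(
                nn.Linear(hidden_size, intermediate_size),
                nn.SiLU(),
                nn.Linear(intermediate_size, hidden_size),
            )
            for _ in range(num_experts)
        ])

    def forward(self, x):                     # x: [batch, seq, hidden]
        probs = self.gate(x).softmax(dim=-1)  # routing probabilities
        weight, idx = probs.topk(1, dim=-1)   # top-1 expert per token
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            mask = idx.squeeze(-1) == e       # tokens routed to expert e
            if mask.any():
                out[mask] = weight.squeeze(-1)[mask].unsqueeze(-1) * expert(x[mask])
        return out
```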
### Note
In [huihui-ai/Huihui-MoE-1B-A0.6B](https://huggingface.co/huihui-ai/Huihui-MoE-1B-A0.6B), `tie_word_embeddings=True` meant the `lm_head` parameters were not saved, which prevented Ollama from using that model. This version saves them, so it works with Ollama.
- **Architecture**: Qwen3MoeForCausalLM model with 3 experts per layer (num_experts=3), activating 1 expert per token (num_experts_per_tok=1).
- **Total Parameters**: ~1.2 billion (1.2B)
- **Activated Parameters**: ~0.62 billion (0.6B) during inference, comparable to Qwen3-0.6B
- **Developer**: huihui.ai
- **Release Date**: June 2025
- **License**: Inherits the license of the Qwen3 base model (apache-2.0)
## Expert Models:
### Coding:
[suayptalha/Qwen3-0.6B-Code-Expert](https://huggingface.co/suayptalha/Qwen3-0.6B-Code-Expert)
This model was fully fine-tuned in BF16 on the first 20k rows of the `nvidia/OpenCodeReasoning` dataset for 1 epoch.
### Math:
[suayptalha/Qwen3-0.6B-Math-Expert](https://huggingface.co/suayptalha/Qwen3-0.6B-Math-Expert)
This model was fully fine-tuned in BF16 on the entire `unsloth/OpenMathReasoning-mini` dataset for 1 epoch.
### Medical:
[suayptalha/Qwen3-0.6B-Medical-Expert](https://huggingface.co/suayptalha/Qwen3-0.6B-Medical-Expert)
This model was fully fine-tuned in BF16 on the first 20k rows of the `FreedomIntelligence/medical-o1-reasoning-SFT` dataset for 1 epoch.
### Instruction Following:
[Qwen/Qwen3-0.6B](https://huggingface.co/Qwen/Qwen3-0.6B)
The `Qwen/Qwen3-0.6B` model was used directly for this expert; no fine-tuning was applied.
## Training
- **Base Model**: Qwen3-0.6B, pre-trained by the Qwen team; the expert models were fine-tuned by suayptalha.
- **Conversion**: The model copies embeddings, self-attention, and normalization weights from Qwen3-0.6B, replacing MLP layers with MoE layers (3 experts). Gating weights are randomly initialized.
- **Fine-Tuning**: Not fine-tuned; users are encouraged to fine-tune for specific tasks to optimize expert routing. A fine-tuned version is already available as [huihui-ai/Huihui-MoE-1.2B-A0.6B-SFT](https://huggingface.co/huihui-ai/Huihui-MoE-1.2B-A0.6B-SFT).
## ollama
You can use [huihui_ai/huihui-moe:1.2b](https://ollama.com/huihui_ai/huihui-moe:1.2b) directly.
Switch the thinking toggle with `/set think` and `/set nothink`:
```
ollama run huihui_ai/huihui-moe:1.2b
```
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, TextStreamer
import torch
import os
import signal
import random
import numpy as np
import time
from collections import Counter

cpu_count = os.cpu_count()
print(f"Number of CPU cores in the system: {cpu_count}")
half_cpu_count = cpu_count // 2
os.environ["MKL_NUM_THREADS"] = str(half_cpu_count)
os.environ["OMP_NUM_THREADS"] = str(half_cpu_count)
torch.set_num_threads(half_cpu_count)
print(f"PyTorch threads: {torch.get_num_threads()}")
print(f"MKL threads: {os.getenv('MKL_NUM_THREADS')}")
print(f"OMP threads: {os.getenv('OMP_NUM_THREADS')}")

# Load the model and tokenizer
NEW_MODEL_ID = "huihui-ai/Huihui-MoE-1.2B-A0.6B"
print(f"Load Model {NEW_MODEL_ID} ... ")
quant_config_32 = BitsAndBytesConfig(
    load_in_32bit=True,
    bnb_32bit_compute_dtype=torch.bfloat16,
    bnb_32bit_use_double_quant=True,
    llm_int32_enable_fp32_cpu_offload=True,
)

model = AutoModelForCausalLM.from_pretrained(
    NEW_MODEL_ID,
    device_map="auto",
    trust_remote_code=True,
    #quantization_config=quant_config_32,
    torch_dtype=torch.bfloat16
)
tokenizer = AutoTokenizer.from_pretrained(NEW_MODEL_ID, trust_remote_code=True)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
    tokenizer.pad_token_id = tokenizer.eos_token_id

messages = []
nothink = False
same_seed = False
skip_prompt = True
skip_special_tokens = True
do_sample = True

def set_random_seed(seed=None):
    """Set random seed for reproducibility. If seed is None, use int(time.time())."""
    if seed is None:
        seed = int(time.time())  # Convert float to int
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)  # If using CUDA
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False
    return seed  # Return seed for logging if needed

class CustomTextStreamer(TextStreamer):
    def __init__(self, tokenizer, skip_prompt=True, skip_special_tokens=True):
        super().__init__(tokenizer, skip_prompt=skip_prompt, skip_special_tokens=skip_special_tokens)
        self.generated_text = ""
        self.stop_flag = False
        self.init_time = time.time()  # Record initialization time
        self.end_time = None  # To store end time
        self.first_token_time = None  # To store first token generation time
        self.token_count = 0  # To track total tokens

    def on_finalized_text(self, text: str, stream_end: bool = False):
        if self.first_token_time is None and text.strip():  # Set first token time on first non-empty text
            self.first_token_time = time.time()
        self.generated_text += text
        # Count tokens in the generated text
        tokens = self.tokenizer.encode(text, add_special_tokens=False)
        self.token_count += len(tokens)
        print(text, end="", flush=True)
        if stream_end:
            self.end_time = time.time()  # Record end time when streaming ends
        if self.stop_flag:
            raise StopIteration

    def stop_generation(self):
        self.stop_flag = True
        self.end_time = time.time()  # Record end time when generation is stopped

    def get_metrics(self):
        """Returns initialization time, first token time, first token latency, end time, total time, total tokens, and tokens per second."""
        if self.end_time is None:
            self.end_time = time.time()  # Set end time if not already set
        total_time = self.end_time - self.init_time  # Total time from init to end
        tokens_per_second = self.token_count / total_time if total_time > 0 else 0
        first_token_latency = (self.first_token_time - self.init_time) if self.first_token_time is not None else None
        metrics = {
            "init_time": self.init_time,
            "first_token_time": self.first_token_time,
            "first_token_latency": first_token_latency,
            "end_time": self.end_time,
            "total_time": total_time,  # Total time in seconds
            "total_tokens": self.token_count,
            "tokens_per_second": tokens_per_second
        }
        return metrics

def generate_stream(model, tokenizer, messages, nothink, skip_prompt, skip_special_tokens, do_sample, max_new_tokens):
    input_ids = tokenizer.apply_chat_template(
        messages,
        tokenize=True,
        enable_thinking=not nothink,
        add_generation_prompt=True,
        return_tensors="pt"
    )
    attention_mask = torch.ones_like(input_ids, dtype=torch.long)
    tokens = input_ids.to(model.device)
    attention_mask = attention_mask.to(model.device)

    streamer = CustomTextStreamer(tokenizer, skip_prompt=skip_prompt, skip_special_tokens=skip_special_tokens)

    def signal_handler(sig, frame):
        streamer.stop_generation()
        print("\n[Generation stopped by user with Ctrl+C]")

    signal.signal(signal.SIGINT, signal_handler)

    if do_sample:
        generate_kwargs = {
            "do_sample": do_sample,
            "max_length": max_new_tokens,
            "temperature": 0.6,
            "top_k": 20,
            "top_p": 0.95,
            "repetition_penalty": 1.2,
            "no_repeat_ngram_size": 2
        }
    else:
        generate_kwargs = {
            "do_sample": do_sample,
            "max_length": max_new_tokens,
            "repetition_penalty": 1.2,
            "no_repeat_ngram_size": 2
        }

    print("Response: ", end="", flush=True)
    try:
        generated_ids = model.generate(
            tokens,
            attention_mask=attention_mask,
            #use_cache=False,
            pad_token_id=tokenizer.pad_token_id,
            streamer=streamer,
            **generate_kwargs
        )
        del generated_ids
    except StopIteration:
        print("\n[Stopped by user]")

    del input_ids, attention_mask
    torch.cuda.empty_cache()
    signal.signal(signal.SIGINT, signal.SIG_DFL)

    return streamer.generated_text, streamer.stop_flag, streamer.get_metrics()

init_seed = set_random_seed()

# List to store activated expert indices
activated_experts = []

# Define hook function to capture gate_probs output
def hook_fn(module, input, output):
    # output is gate_probs, shape: [batch_size, sequence_length, num_experts]
    gate_probs = output
    # Compute top-1 expert indices (since only one expert is activated)
    _, topk_indices = gate_probs.topk(1, dim=-1)  # Take top-1
    # Flatten and store activated expert indices
    activated_experts.extend(topk_indices.squeeze(-1).view(-1).cpu().tolist())

hooks = []
for layer in model.model.layers:
    hooks.append(layer.mlp.gate.register_forward_hook(hook_fn))

while True:
    if same_seed:
        set_random_seed(init_seed)
    else:
        init_seed = set_random_seed()

    print(f"\nnothink: {nothink}")
    print(f"skip_prompt: {skip_prompt}")
    print(f"skip_special_tokens: {skip_special_tokens}")
    print(f"do_sample: {do_sample}")
    print(f"same_seed: {same_seed}, {init_seed}\n")

    user_input = input("User: ").strip()
    if user_input.lower() == "/exit":
        print("Exiting chat.")
        break
    if user_input.lower() == "/clear":
        messages = []
        print("Chat history cleared. Starting a new conversation.")
        continue
    if user_input.lower() == "/nothink":
        nothink = not nothink
        continue
    if user_input.lower() == "/skip_prompt":
        skip_prompt = not skip_prompt
        continue
    if user_input.lower() == "/skip_special_tokens":
        skip_special_tokens = not skip_special_tokens
        continue
    if user_input.lower().startswith("/same_seed"):
        parts = user_input.split()
        if len(parts) == 1:  # /same_seed (no number)
            same_seed = not same_seed  # Toggle switch
        elif len(parts) == 2:  # /same_seed <number>
            try:
                init_seed = int(parts[1])  # Extract and convert number to int
                same_seed = True
            except ValueError:
                print("Error: Please provide a valid integer after /same_seed")
        continue
    if user_input.lower() == "/do_sample":
        do_sample = not do_sample
        continue
    if not user_input:
        print("Input cannot be empty. Please enter something.")
        continue

    messages.append({"role": "user", "content": user_input})
    activated_experts = []
    response, stop_flag, metrics = generate_stream(model, tokenizer, messages, nothink, skip_prompt, skip_special_tokens, do_sample, 320960)
    print("\n\nMetrics:")
    for key, value in metrics.items():
        print(f"  {key}: {value}")

    # Count the frequency of each activated expert
    expert_counts = Counter(activated_experts)

    # Print activation statistics
    print("\nActivated Expert Statistics:")
    for expert_idx, count in sorted(expert_counts.items()):
        print(f"Expert {expert_idx}: {count} times")
    print("", flush=True)

    if stop_flag:
        continue
    messages.append({"role": "assistant", "content": response})

# Remove all hooks after inference
for h in hooks:
    h.remove()
```
## Applications
- **Text Generation**: Articles, dialogues, and creative writing.
- **Question Answering**: Information retrieval and query resolution.
- **Conversational AI**: Multi-turn dialogues for chatbots.
- **Research**: Exploration of MoE architectures and efficient model scaling.
## Limitations
- **Fine-Tuning Required**: Randomly initialized gating weights may lead to suboptimal expert utilization without fine-tuning.
- **Compatibility**: Developed with transformers 4.52.4; ensure matching versions to avoid loading issues.
- **Inference Speed**: While efficient for an MoE model, performance depends on hardware (GPU recommended).
## Ethical Considerations
- **Bias**: Inherits potential biases from the Qwen3-0.6B base model; users should evaluate outputs for fairness.
- **Usage**: Intended for research and responsible applications; avoid generating harmful or misleading content.
## Contact
- **Developer**: huihui.ai
- **Repository**: huihui-ai/Huihui-MoE-1.2B-A0.6B (available locally or on Hugging Face)
- **Issues**: Report bugs or request features via the repository, or email [email protected]
## Acknowledgments
- Built upon the Qwen3-0.6B model by the Qwen team.
- Built upon the Experts model by the Suayptalha team.
- Powered by the Hugging Face transformers library. |
MinaMila/gemma_2b_unlearned_2nd_5e-7_1.0_0.05_0.05_0.25_epoch1 | MinaMila | 2025-06-16T03:48:39Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T03:46:51Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
HuyTran1301/qwen3-keyphrase-extractor-new | HuyTran1301 | 2025-06-16T03:47:37Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-16T03:47:23Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
New-tutorial-kayla-viral-video/FULL.VIDEO.kayla.Viral.Video.Tutorial.Official | New-tutorial-kayla-viral-video | 2025-06-16T03:44:39Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-16T03:44:21Z | <animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
dimasandhk/multilabel-emotion-text-classification | dimasandhk | 2025-06-16T03:43:43Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"en",
"base_model:microsoft/deberta-v3-large",
"base_model:finetune:microsoft/deberta-v3-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-06-15T21:14:38Z | ---
license: mit
language:
- en
base_model:
- microsoft/deberta-v3-large
tags:
- text-classification
- transformers
pipeline_tag: text-classification
library_name: transformers
---
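A minimal usage sketch (our addition — the card ships no example; it assumes the model follows the standard transformers text-classification interface declared above, with `top_k=None` returning a score for every emotion label):
```python
# Hypothetical usage sketch for this multilabel emotion classifier.
from transformers import pipeline
clf = pipeline(
    "text-classification",
    model="dimasandhk/multilabel-emotion-text-classification",
    top_k=None,  # return scores for all labels, not just the argmax
)
print(clf("I can't believe it worked, I'm so relieved!"))
```
|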
Ver-video-filtrado-anahi-antonella-complet/VIDEOS.filtrado.anahi.antonella.completo.anahi.antonella.clip | Ver-video-filtrado-anahi-antonella-complet | 2025-06-16T03:42:27Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-16T03:41:45Z | <animated-image data-catalyst=""><a href="https://sexleakedviral.com/new-leaked-video/?news-viral-video" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a> |
MinaMila/gemma_2b_unlearned_2nd_5e-7_1.0_0.05_0.05_0.5_epoch2 | MinaMila | 2025-06-16T03:40:29Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T03:38:45Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
fujiba/klarsky-dust-spot-detector-experimental | fujiba | 2025-06-16T03:39:18Z | 0 | 0 | null | [
"license:cc-by-nc-4.0",
"region:us"
] | null | 2025-06-15T14:28:50Z | ---
title: KlarSky(Experimental)
emoji: ☀️
colorFrom: "#A61414"
colorTo: "#F2380F"
pinned: false
license: cc-by-nc-4.0
---
|
huihui-ai/Huihui-MoE-1B-A0.6B | huihui-ai | 2025-06-16T03:37:56Z | 31 | 1 | transformers | [
"transformers",
"safetensors",
"qwen3_moe",
"text-generation",
"moe",
"conversational",
"base_model:Qwen/Qwen3-0.6B",
"base_model:finetune:Qwen/Qwen3-0.6B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-11T07:05:15Z | ---
license: apache-2.0
base_model:
- Qwen/Qwen3-0.6B
- suayptalha/Qwen3-0.6B-Code-Expert
- suayptalha/Qwen3-0.6B-Math-Expert
- suayptalha/Qwen3-0.6B-Medical-Expert
library_name: transformers
license_link: https://huggingface.co/Qwen/Qwen3-0.6B/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- moe
---
# huihui-ai/Huihui-MoE-1B-A0.6B
## Model Overview
Huihui-MoE-1B-A0.6B is a **Mixture of Experts (MoE)** language model developed by **huihui.ai**, built upon the **[Qwen/Qwen3-0.6B](https://huggingface.co/Qwen/Qwen3-0.6B)** base model. It enhances the standard Transformer architecture by replacing MLP layers with MoE layers, each containing 3 experts, to achieve high performance with efficient inference. The model is designed for natural language processing tasks, including text generation, question answering, and conversational applications.
This version does not support Ollama: because `tie_word_embeddings=True`, no `lm_head` parameters are saved in the checkpoint. If Ollama support is required, please use the newer [huihui-ai/Huihui-MoE-1.2B-A0.6B](https://huggingface.co/huihui-ai/Huihui-MoE-1.2B-A0.6B).
- **Architecture**: Qwen3MoeForCausalLM model with 3 experts per layer (num_experts=3), activating 1 expert per token (num_experts_per_tok=1); see the config check after this list.
- **Total Parameters**: ~1.1 billion (1B)
- **Activated Parameters**: ~0.62 billion (0.6B) during inference, comparable to Qwen3-0.6B
- **Developer**: huihui.ai
- **Release Date**: June 2025
- **License**: Inherits the license of the Qwen3 base model (apache-2.0)
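A quick sanity check of the figures above — a sketch that assumes the released config follows the usual Qwen3-MoE attribute names (`num_experts`, `num_experts_per_tok`):
```python
# Inspect the MoE hyperparameters from the published config.
from transformers import AutoConfig
cfg = AutoConfig.from_pretrained("huihui-ai/Huihui-MoE-1B-A0.6B", trust_remote_code=True)
print(cfg.num_experts, cfg.num_experts_per_tok)  # expected: 3, 1
```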
## Expert Models:
### Coding:
[suayptalha/Qwen3-0.6B-Code-Expert](https://huggingface.co/suayptalha/Qwen3-0.6B-Code-Expert)
This model was fully fine-tuned with BF16 on the first 20k rows of the `nvidia/OpenCodeReasoning` dataset for 1 epoch.
### Math:
[suayptalha/Qwen3-0.6B-Math-Expert](https://huggingface.co/suayptalha/Qwen3-0.6B-Math-Expert)
This model was fully fine-tuned with BF16 on the entire `unsloth/OpenMathReasoning-mini` dataset for 1 epoch.
### Medical:
[suayptalha/Qwen3-0.6B-Medical-Expert](https://huggingface.co/suayptalha/Qwen3-0.6B-Medical-Expert)
This model was fully fine-tuned with BF16 on the first 20k rows of the `FreedomIntelligence/medical-o1-reasoning-SFT` dataset for 1 epoch.
### Instruction Following:
[Qwen/Qwen3-0.6B](https://huggingface.co/Qwen/Qwen3-0.6B)
The `Qwen/Qwen3-0.6B` model was used directly for this expert; no fine-tuning was applied.
## Training
- **Base Model**: Qwen3-0.6B, pre-trained by the Qwen team; expert models fine-tuned by the suayptalha team.
- **Conversion**: The model copies the embeddings, self-attention, and normalization weights from Qwen3-0.6B and replaces each MLP layer with an MoE layer holding 3 experts. Gating weights are randomly initialized; a minimal sketch of this idea follows the list below.
- **Fine-Tuning**: The merged model itself is not fine-tuned; fine-tuning for specific tasks is recommended to optimize expert routing. A fine-tuned version is already available: [huihui-ai/Huihui-MoE-1B-A0.6B-SFT](https://huggingface.co/huihui-ai/Huihui-MoE-1B-A0.6B-SFT).
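A minimal sketch of that conversion idea — not the authors' actual script, and the module/attribute names are illustrative assumptions:
```python
# Illustrative only: derive one MoE layer from a dense MLP as described above.
import copy
import torch.nn as nn
def mlp_to_moe(dense_mlp: nn.Module, hidden_size: int, num_experts: int = 3):
    # Each expert starts from existing MLP weights (base model or a donor expert).
    experts = nn.ModuleList(copy.deepcopy(dense_mlp) for _ in range(num_experts))
    # The router ("gate") is freshly, randomly initialized.
    gate = nn.Linear(hidden_size, num_experts, bias=False)
    nn.init.normal_(gate.weight, std=0.02)
    return experts, gate
```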
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, TextStreamer
import torch
import os
import signal
cpu_count = os.cpu_count() or 1  # os.cpu_count() can return None
print(f"Number of CPU cores in the system: {cpu_count}")
half_cpu_count = max(1, cpu_count // 2)  # avoid requesting 0 threads on single-core machines
os.environ["MKL_NUM_THREADS"] = str(half_cpu_count)
os.environ["OMP_NUM_THREADS"] = str(half_cpu_count)
torch.set_num_threads(half_cpu_count)
print(f"PyTorch threads: {torch.get_num_threads()}")
print(f"MKL threads: {os.getenv('MKL_NUM_THREADS')}")
print(f"OMP threads: {os.getenv('OMP_NUM_THREADS')}")
# Load the model and tokenizer
NEW_MODEL_ID = "huihui-ai/Huihui-MoE-1B-A0.6B"
print(f"Load Model {NEW_MODEL_ID} ... ")
quant_config_4 = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_compute_dtype=torch.bfloat16,
bnb_4bit_use_double_quant=True,
llm_int8_enable_fp32_cpu_offload=True,
)
model = AutoModelForCausalLM.from_pretrained(
NEW_MODEL_ID,
device_map="auto",
trust_remote_code=True,
#quantization_config=quant_config_4,
torch_dtype=torch.bfloat16
)
tokenizer = AutoTokenizer.from_pretrained(NEW_MODEL_ID, trust_remote_code=True)
if tokenizer.pad_token is None:
tokenizer.pad_token = tokenizer.eos_token
tokenizer.pad_token_id = tokenizer.eos_token_id
initial_messages = [{"role": "system", "content": "You are a helpful assistant."}]
messages = initial_messages.copy()
enable_thinking = True
skip_prompt=True
skip_special_tokens=True
class CustomTextStreamer(TextStreamer):
def __init__(self, tokenizer, skip_prompt=True, skip_special_tokens=True):
super().__init__(tokenizer, skip_prompt=skip_prompt, skip_special_tokens=skip_special_tokens)
self.generated_text = ""
self.stop_flag = False
def on_finalized_text(self, text: str, stream_end: bool = False):
self.generated_text += text
print(text, end="", flush=True)
if self.stop_flag:
raise StopIteration
def stop_generation(self):
self.stop_flag = True
def generate_stream(model, tokenizer, messages, enable_thinking, skip_prompt, skip_special_tokens, max_new_tokens):
input_ids = tokenizer.apply_chat_template(
messages,
tokenize=True,
enable_thinking = enable_thinking,
add_generation_prompt=True,
return_tensors="pt"
)
attention_mask = torch.ones_like(input_ids, dtype=torch.long)
tokens = input_ids.to(model.device)
attention_mask = attention_mask.to(model.device)
streamer = CustomTextStreamer(tokenizer, skip_prompt=skip_prompt, skip_special_tokens=skip_special_tokens)
def signal_handler(sig, frame):
streamer.stop_generation()
print("\n[Generation stopped by user with Ctrl+C]")
signal.signal(signal.SIGINT, signal_handler)
print("Response: ", end="", flush=True)
try:
generated_ids = model.generate(
tokens,
attention_mask=attention_mask,
#use_cache=False,
max_new_tokens=max_new_tokens,
do_sample=True,
pad_token_id=tokenizer.pad_token_id,
streamer=streamer
)
del generated_ids
except StopIteration:
print("\n[Stopped by user]")
del input_ids, attention_mask
torch.cuda.empty_cache()
signal.signal(signal.SIGINT, signal.SIG_DFL)
return streamer.generated_text, streamer.stop_flag
while True:
user_input = input("User: ").strip()
if user_input.lower() == "/exit":
print("Exiting chat.")
break
if user_input.lower() == "/clear":
messages = initial_messages.copy()
print("Chat history cleared. Starting a new conversation.")
continue
if user_input.lower() == "/nothink":
if enable_thinking:
enable_thinking = False
print("Thinking = False.")
else:
enable_thinking = True
print("Thinking = True.")
continue
if user_input.lower() == "/skip_prompt":
if skip_prompt:
skip_prompt = False
print("skip_prompt = False.")
else:
skip_prompt = True
print("skip_prompt = True.")
continue
if user_input.lower() == "/skip_special_tokens":
if skip_special_tokens:
skip_special_tokens = False
print("skip_special_tokens = False.")
else:
skip_special_tokens = True
print("skip_special_tokens = True.")
continue
if not user_input:
print("Input cannot be empty. Please enter something.")
continue
messages.append({"role": "user", "content": user_input})
response, stop_flag = generate_stream(model, tokenizer, messages, enable_thinking, skip_prompt, skip_special_tokens, 14192)
print("", flush=True)
if stop_flag:
continue
messages.append({"role": "assistant", "content": response})
```
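Design note: the `CustomTextStreamer` subclass exists so that pressing `Ctrl+C` during generation raises `StopIteration` inside the streaming callback, aborting a long response without killing the process; the SIGINT handler is restored to the default once generation finishes.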
## Applications
- **Text Generation**: Articles, dialogues, and creative writing.
- **Question Answering**: Information retrieval and query resolution.
- **Conversational AI**: Multi-turn dialogues for chatbots.
- **Research**: Exploration of MoE architectures and efficient model scaling.
## Limitations
- **Fine-Tuning Required**: Randomly initialized gating weights may lead to suboptimal expert utilization without fine-tuning.
- **Compatibility**: Developed with transformers 4.52.4; ensure matching versions to avoid loading issues.
- **Inference Speed**: While efficient for an MoE model, performance depends on hardware (GPU recommended).
## Ethical Considerations
- **Bias**: Inherits potential biases from the Qwen3-0.6B base model; users should evaluate outputs for fairness.
- **Usage**: Intended for research and responsible applications; avoid generating harmful or misleading content.
## Contact
- **Developer**: huihui.ai
- **Repository**: huihui-ai/Huihui-MoE-1B-A0.6B (available locally or on Hugging Face)
- **Issues**: Report bugs or request features via the repository, or send an email to [email protected]
## Acknowledgments
- Built upon the Qwen3-0.6B model by the Qwen team.
- Built upon the expert models by the suayptalha team.
- Powered by the Hugging Face transformers library. |
MinaMila/phi3_unlearned_2nd_5e-7_1.0_0.25_0.5_0.75_epoch2 | MinaMila | 2025-06-16T03:36:34Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T03:34:42Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
MinaMila/gemma_2b_unlearned_2nd_5e-7_1.0_0.05_0.05_0.5_epoch1 | MinaMila | 2025-06-16T03:32:44Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T03:30:58Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
New-tutorial-Bhumi-Ahir-viral-video/FULL.VIDEO.Bhumi.Ahir.Viral.Video.Tutorial.Official | New-tutorial-Bhumi-Ahir-viral-video | 2025-06-16T03:29:15Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-16T03:28:56Z | <animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
MinaMila/gemma_2b_unlearned_2nd_5e-7_1.0_0.05_0.05_0.75_epoch2 | MinaMila | 2025-06-16T03:24:40Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T03:22:46Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
The-Welcomer/cluster-test-unbiased | The-Welcomer | 2025-06-16T03:23:05Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T03:15:33Z | ---
base_model: unsloth/qwen3-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** The-Welcomer
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen3-8b-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
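A minimal loading sketch (our addition — this mirrors Unsloth's usual `FastLanguageModel` API rather than anything documented in this card; the sequence length is an arbitrary assumption):
```python
# Hypothetical usage sketch; not an official example from the model author.
from unsloth import FastLanguageModel
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="The-Welcomer/cluster-test-unbiased",
    max_seq_length=2048,  # assumption: adjust to your use case
    load_in_4bit=True,    # the base checkpoint is a bnb 4-bit variant
)
FastLanguageModel.for_inference(model)  # switch to the faster inference path
```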
|
MinaMila/gemma_2b_unlearned_2nd_5e-7_1.0_0.05_0.05_0.75_epoch1 | MinaMila | 2025-06-16T03:16:45Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T03:14:58Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RaiffsBits/Gendette | RaiffsBits | 2025-06-16T03:11:21Z | 0 | 0 | null | [
"doi:10.57967/hf/5795",
"region:us"
] | null | 2025-06-16T00:39:09Z | # Codette AI Interface
A sophisticated AI assistant interface featuring multi-perspective reasoning, quantum-inspired processing, and cognitive cocoon artifact management.
## Features
### Multi-Perspective Reasoning
- Newton's logical analysis
- Da Vinci's creative synthesis
- Quantum computing perspectives
- Philosophical inquiry
- Neural network processing
- Resilient kindness framework
### Quantum-Inspired Processing
- Quantum state visualization
- Chaos theory integration
- Parallel thought processing
- Entanglement-based correlations
### Cognitive Cocoon System
- Thought pattern preservation
- Encrypted storage
- Pattern analysis
- Memory management
### Advanced UI Features
- Dark/Light mode
- Real-time quantum state visualization
- Interactive chat interface
- Admin dashboard
- File management system
### Security & Privacy
- Supabase authentication
- Row-level security
- Encrypted storage
- Admin role management
## Tech Stack
- React + TypeScript
- Tailwind CSS
- Supabase
- Framer Motion
- Lucide Icons
## Getting Started
1. Clone the repository
2. Copy `.env.example` to `.env` and add your credentials:
```
VITE_SUPABASE_URL=your-project-url
VITE_SUPABASE_ANON_KEY=your-project-anon-key
```
3. Install dependencies:
```bash
npm install
```
4. Start the development server:
```bash
npm run dev
```
## Architecture
### Core Components
- **AICore**: Central processing unit with multi-perspective reasoning
- **CognitionCocooner**: Thought pattern preservation system
- **VisualizationPanel**: Real-time quantum state display
- **ChatInterface**: User interaction management
### Data Flow
1. User input โ Chat Interface
2. AICore processes with multiple perspectives
3. Results stored in Cognitive Cocoons
4. Real-time visualization updates
5. Response rendered to user
## Contributing
We welcome contributions! Please read our contributing guidelines before submitting pull requests.
## License
MIT License - See LICENSE file for details |
18-video-filtrado-anahi-antonella-video/Ver.video.filtrado.anahi.antonella.video.completo.anahi.antonella.filtrado.clip | 18-video-filtrado-anahi-antonella-video | 2025-06-16T03:03:01Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-16T03:02:39Z | <animated-image data-catalyst=""><a href="https://sexleakedviral.com/new-leaked-video/?news-viral-video" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a> |
MinaMila/phi3_unlearned_2nd_5e-7_1.0_0.25_0.75_0.15_epoch1 | MinaMila | 2025-06-16T03:02:46Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T03:00:56Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
IoanaLiviaPopescu/real-data-synth-data-1200-1-Standard-B-whisper-small-v0 | IoanaLiviaPopescu | 2025-06-16T03:02:04Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"ro",
"dataset:IoanaLiviaPopescu/RealVoiceSynthVoice-1200-1-Standard-B",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-06-16T01:55:05Z | ---
library_name: transformers
language:
- ro
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- IoanaLiviaPopescu/RealVoiceSynthVoice-1200-1-Standard-B
metrics:
- wer
model-index:
- name: IoanaLiviaPopescu/real-data-synth-data-1200-1-Standard-B-whisper-small-v0
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: IoanaLiviaPopescu/RealVoiceSynthVoice-1200-1-Standard-B
type: IoanaLiviaPopescu/RealVoiceSynthVoice-1200-1-Standard-B
config: default
split: test
args: 'split: validation'
metrics:
- name: Wer
type: wer
value: 16.835699797160245
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# IoanaLiviaPopescu/real-data-synth-data-1200-1-Standard-B-whisper-small-v0
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the IoanaLiviaPopescu/RealVoiceSynthVoice-1200-1-Standard-B dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3760
- Wer: 16.8357
## Model description
More information needed
## Intended uses & limitations
More information needed
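In the absence of official guidance, a minimal inference sketch (assuming the standard 🤗 transformers ASR pipeline applies to this Whisper fine-tune):
```python
from transformers import pipeline

# Load the fine-tuned Whisper-small checkpoint for Romanian ASR.
asr = pipeline(
    "automatic-speech-recognition",
    model="IoanaLiviaPopescu/real-data-synth-data-1200-1-Standard-B-whisper-small-v0",
)

# Transcribe a local audio file (the path here is illustrative).
print(asr("sample.wav")["text"])
```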
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: `OptimizerNames.ADAMW_BNB` (8-bit AdamW from bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| No log | 0 | 0 | 0.6024 | 27.8812 |
| 0.2756 | 1.0 | 51 | 0.4008 | 18.0712 |
| 0.1052 | 2.0 | 102 | 0.3728 | 17.4073 |
| 0.055 | 3.0 | 153 | 0.3760 | 16.8357 |
| 0.0322 | 4.0 | 204 | 0.3911 | 17.3889 |
| 0.0226 | 5.0 | 255 | 0.4034 | 17.6286 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
MinaMila/gemma_2b_unlearned_2nd_5e-7_1.0_0.05_0.15_0.05_epoch1 | MinaMila | 2025-06-16T03:00:58Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T02:58:57Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
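Although this section is empty, the repository tags (gemma2, text-generation) suggest the standard causal-LM loading path; a minimal, unverified sketch:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MinaMila/gemma_2b_unlearned_2nd_5e-7_1.0_0.05_0.15_0.05_epoch1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```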
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ALYTV/DeepCoder-14B-Preview-mlx-3Bit | ALYTV | 2025-06-16T02:59:03Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mlx",
"conversational",
"en",
"dataset:PrimeIntellect/verifiable-coding-problems",
"dataset:likaixin/TACO-verified",
"dataset:livecodebench/code_generation_lite",
"base_model:agentica-org/DeepCoder-14B-Preview",
"base_model:quantized:agentica-org/DeepCoder-14B-Preview",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"3-bit",
"region:us"
] | text-generation | 2025-06-16T02:58:44Z | ---
license: mit
library_name: transformers
datasets:
- PrimeIntellect/verifiable-coding-problems
- likaixin/TACO-verified
- livecodebench/code_generation_lite
language:
- en
base_model: agentica-org/DeepCoder-14B-Preview
pipeline_tag: text-generation
tags:
- mlx
---
# ALYTV/DeepCoder-14B-Preview-mlx-3Bit
The model [ALYTV/DeepCoder-14B-Preview-mlx-3Bit](https://huggingface.co/ALYTV/DeepCoder-14B-Preview-mlx-3Bit) was converted to MLX format from [agentica-org/DeepCoder-14B-Preview](https://huggingface.co/agentica-org/DeepCoder-14B-Preview) using mlx-lm version **0.22.3**.
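The card does not record the conversion command itself; with mlx-lm, a 3-bit quantized conversion of this kind is typically invoked roughly as follows (flags are illustrative):
```bash
# Illustrative invocation of the mlx-lm convert CLI, not taken from the card.
mlx_lm.convert \
    --hf-path agentica-org/DeepCoder-14B-Preview \
    -q --q-bits 3 \
    --upload-repo ALYTV/DeepCoder-14B-Preview-mlx-3Bit
```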
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("ALYTV/DeepCoder-14B-Preview-mlx-3Bit")
prompt="hello"
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
Daxiao123/test_model | Daxiao123 | 2025-06-16T02:58:43Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-06-16T02:58:13Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
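No starter code is provided, but the repository tags (bert, text-classification) imply the usual pipeline entry point; a minimal, unverified sketch:
```python
from transformers import pipeline

# The label set and model quality are undocumented; this only shows loading.
classifier = pipeline("text-classification", model="Daxiao123/test_model")
print(classifier("This is a test sentence."))
```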
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
hussamaldeen/Hussamdn | hussamaldeen | 2025-06-16T02:57:45Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-06-16T02:29:43Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: hussamdn
---
# Hussamdn
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `hussamdn` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "hussamdn",
"lora_weights": "https://huggingface.co/hussamaldeen/Hussamdn/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('hussamaldeen/Hussamdn', weight_name='lora.safetensors')
image = pipeline('hussamdn').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/hussamaldeen/Hussamdn/discussions) to add images that show off what you've made with this LoRA.
|
KevinG/Meta-Llama-3-8B-Instruct-GRPO-alpaca_naive_50_no_KL | KevinG | 2025-06-16T02:56:56Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"trl",
"grpo",
"conversational",
"arxiv:2402.03300",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:finetune:meta-llama/Meta-Llama-3-8B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T02:49:58Z | ---
base_model: meta-llama/Meta-Llama-3-8B-Instruct
library_name: transformers
model_name: Meta-Llama-3-8B-Instruct-GRPO-alpaca_naive_50_no_KL
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---
# Model Card for Meta-Llama-3-8B-Instruct-GRPO-alpaca_naive_50_no_KL
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="KevinG/Meta-Llama-3-8B-Instruct-GRPO-alpaca_naive_50_no_KL", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/sleeepeer-penn-state/huggingface/runs/ddh9osqf)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
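Schematically, a GRPO run with TRL looks like the example below (this is not the authors' actual training script; the dataset and reward function are placeholders):
```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

# Placeholder reward: GRPO scores each sampled completion with a scalar
# reward; a real run would measure task success instead of length.
def reward_len(completions, **kwargs):
    return [-abs(200 - len(completion)) for completion in completions]

dataset = load_dataset("trl-lib/tldr", split="train")  # illustrative dataset

trainer = GRPOTrainer(
    model="meta-llama/Meta-Llama-3-8B-Instruct",
    reward_funcs=reward_len,
    args=GRPOConfig(output_dir="grpo-out"),
    train_dataset=dataset,
)
trainer.train()
```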
### Framework versions
- TRL: 0.18.1
- Transformers: 4.52.4
- Pytorch: 2.6.0+cu124
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
ALYTV/zeta-mlx-6Bit | ALYTV | 2025-06-16T02:54:57Z | 0 | 0 | mlx | [
"mlx",
"safetensors",
"qwen2",
"dataset:zed-industries/zeta",
"base_model:zed-industries/zeta",
"base_model:quantized:zed-industries/zeta",
"license:apache-2.0",
"6-bit",
"region:us"
] | null | 2025-06-16T02:54:35Z | ---
datasets:
- zed-industries/zeta
license: apache-2.0
base_model: zed-industries/zeta
tags:
- mlx
---
# ALYTV/zeta-mlx-6Bit
The model [ALYTV/zeta-mlx-6Bit](https://huggingface.co/ALYTV/zeta-mlx-6Bit) was converted to MLX format from [zed-industries/zeta](https://huggingface.co/zed-industries/zeta) using mlx-lm version **0.22.3**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("ALYTV/zeta-mlx-6Bit")
prompt="hello"
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
MinaMila/gemma_2b_unlearned_2nd_5e-7_1.0_0.05_0.15_0.15_epoch2 | MinaMila | 2025-06-16T02:52:41Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T02:50:53Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
glif-loradex-trainer/Angelo-ec24_d00d | glif-loradex-trainer | 2025-06-16T02:47:57Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"template:sd-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:finetune:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us",
"flux",
"lora",
"base_model:adapter:black-forest-labs/FLUX.1-dev"
] | text-to-image | 2025-06-16T02:47:43Z | ---
tags:
- diffusers
- text-to-image
- template:sd-lora
- base_model:black-forest-labs/FLUX.1-dev
- base_model:finetune:black-forest-labs/FLUX.1-dev
- license:other
- region:us
- flux
- lora
widget:
- output:
url: samples/1750041998772__000000500_0.jpg
text: d00d columns
- output:
url: samples/1750042024106__000000500_1.jpg
text: d00d top
- output:
url: samples/1750042049468__000000500_2.jpg
text: d00d messy
base_model: black-forest-labs/FLUX.1-dev
trigger: "d00d"
instance_prompt: "d00d"
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# d00d
Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit) under the [Glif Loradex program](https://huggingface.co/glif-loradex-trainer) by [Glif](https://glif.app) user `Angelo-ec24`.
<Gallery />
## Trigger words
You should use `d00d` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/glif-loradex-trainer/Angelo-ec24_d00d/tree/main) them in the Files & versions tab.
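## Use it with 🧨 diffusers
A minimal loading sketch, assuming the weights file in this repository is named `lora.safetensors` (check the Files & versions tab for the actual filename):
```python
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.float16
).to("cuda")

# Attach the LoRA and prompt with the trigger word.
pipeline.load_lora_weights(
    "glif-loradex-trainer/Angelo-ec24_d00d", weight_name="lora.safetensors"
)
image = pipeline("d00d columns").images[0]
```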
## License
This model is licensed under the [flux-1-dev-non-commercial-license](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).
|
MinaMila/gemma_2b_unlearned_2nd_5e-7_1.0_0.05_0.15_0.15_epoch1 | MinaMila | 2025-06-16T02:44:48Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T02:42:59Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Yuichi1218/Llama-3.1-tetun-8B-instruct-e3 | Yuichi1218 | 2025-06-16T02:44:25Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T02:39:12Z | ---
base_model: unsloth/meta-llama-3.1-8b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Yuichi1218
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
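The card ships no inference snippet; assuming the checkpoint loads as a standard Llama instruct model, a minimal sketch:
```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Yuichi1218/Llama-3.1-tetun-8B-instruct-e3",
)
messages = [{"role": "user", "content": "Hello!"}]
print(generator(messages, max_new_tokens=64)[0]["generated_text"])
```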
|
chnaaam/minimaxi | chnaaam | 2025-06-16T02:42:07Z | 165 | 0 | transformers | [
"transformers",
"safetensors",
"ovis",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"region:us"
] | text-generation | 2025-06-10T16:03:04Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
oddadmix/whisper-large-egyptian-3 | oddadmix | 2025-06-16T02:39:49Z | 0 | 0 | transformers | [
"transformers",
"text-generation-inference",
"unsloth",
"whisper",
"en",
"base_model:unsloth/whisper-large-v3",
"base_model:finetune:unsloth/whisper-large-v3",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-16T02:39:48Z | ---
base_model: unsloth/whisper-large-v3
tags:
- text-generation-inference
- transformers
- unsloth
- whisper
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** oddadmix
- **License:** apache-2.0
- **Finetuned from model :** unsloth/whisper-large-v3
This whisper model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
picard47at/punctuation_1350_1.7B_levdist | picard47at | 2025-06-16T02:39:07Z | 94 | 0 | transformers | [
"transformers",
"safetensors",
"mt5",
"text2text-generation",
"generated_from_trainer",
"unsloth",
"trl",
"sft",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-06-11T03:23:38Z | ---
base_model: unsloth/qwen3-1.7b-unsloth-bnb-4bit
library_name: transformers
model_name: punctuation_1350_1.7B_levdist
tags:
- generated_from_trainer
- unsloth
- trl
- sft
licence: license
---
# Model Card for punctuation_1350_1.7B_levdist
This model is a fine-tuned version of [unsloth/qwen3-1.7b-unsloth-bnb-4bit](https://huggingface.co/unsloth/qwen3-1.7b-unsloth-bnb-4bit).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="picard47at/punctuation_1350_1.7B_levdist", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/picardtseng-pesi/punctuation_1350_1.7B_levdist/runs/sv9hcyd7)
This model was trained with SFT.
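Schematically, an SFT run with TRL on this Unsloth-quantized base looks like the example below (illustrative only; the dataset is a placeholder, not the punctuation data actually used):
```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("trl-lib/Capybara", split="train")  # placeholder dataset

trainer = SFTTrainer(
    model="unsloth/qwen3-1.7b-unsloth-bnb-4bit",
    args=SFTConfig(output_dir="sft-out"),
    train_dataset=dataset,
)
trainer.train()
```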
### Framework versions
- TRL: 0.18.1
- Transformers: 4.52.4
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
gradientrouting-spar/mc14_badmed_kl_div_dsd-5_msd-5_beta_kl-3_seed_1_epoch_1 | gradientrouting-spar | 2025-06-16T02:35:45Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-16T02:35:33Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
player1537/Jan-nano-4bit | player1537 | 2025-06-16T02:35:31Z | 0 | 0 | mlx | [
"mlx",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"base_model:Menlo/Jan-nano",
"base_model:quantized:Menlo/Jan-nano",
"license:apache-2.0",
"4-bit",
"region:us"
] | text-generation | 2025-06-16T02:24:08Z | ---
license: apache-2.0
base_model: Menlo/Jan-nano
pipeline_tag: text-generation
library_name: mlx
tags:
- mlx
---
# player1537/Jan-nano-4bit
This model [player1537/Jan-nano-4bit](https://huggingface.co/player1537/Jan-nano-4bit) was
converted to MLX format from [Menlo/Jan-nano](https://huggingface.co/Menlo/Jan-nano)
using mlx-lm version **0.25.2**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("player1537/Jan-nano-4bit")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
dgambettaphd/M_llm2_run2_gen7_WXS_doc1000_synt64_lr1e-04_acm_FRESH | dgambettaphd | 2025-06-16T02:31:29Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-16T02:31:16Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
actuator-x/Llama-3.2-3B-Tele-Q8_0-GGUF | actuator-x | 2025-06-16T02:28:28Z | 0 | 0 | null | [
"gguf",
"nlp",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:AliMaatouk/Llama-3.2-3B-Tele",
"base_model:quantized:AliMaatouk/Llama-3.2-3B-Tele",
"license:llama3.2",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T02:28:14Z | ---
license: llama3.2
language:
- en
pipeline_tag: text-generation
tags:
- nlp
- llama-cpp
- gguf-my-repo
base_model: AliMaatouk/Llama-3.2-3B-Tele
---
# actuator-x/Llama-3.2-3B-Tele-Q8_0-GGUF
This model was converted to GGUF format from [`AliMaatouk/Llama-3.2-3B-Tele`](https://huggingface.co/AliMaatouk/Llama-3.2-3B-Tele) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/AliMaatouk/Llama-3.2-3B-Tele) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo actuator-x/Llama-3.2-3B-Tele-Q8_0-GGUF --hf-file llama-3.2-3b-tele-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo actuator-x/Llama-3.2-3B-Tele-Q8_0-GGUF --hf-file llama-3.2-3b-tele-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo actuator-x/Llama-3.2-3B-Tele-Q8_0-GGUF --hf-file llama-3.2-3b-tele-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo actuator-x/Llama-3.2-3B-Tele-Q8_0-GGUF --hf-file llama-3.2-3b-tele-q8_0.gguf -c 2048
```
|
johngreendr1/510c9729-38c6-4879-b553-91ed355f29a4 | johngreendr1 | 2025-06-16T02:27:28Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:defog/sqlcoder-7b-2",
"base_model:adapter:defog/sqlcoder-7b-2",
"region:us"
] | null | 2025-06-15T20:53:07Z | ---
base_model: defog/sqlcoder-7b-2
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
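Pending details from the authors, here is a minimal sketch for loading this adapter on its declared base model; the dtype and device settings are assumptions:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model this adapter was trained against
tokenizer = AutoTokenizer.from_pretrained("defog/sqlcoder-7b-2")
base = AutoModelForCausalLM.from_pretrained(
    "defog/sqlcoder-7b-2", torch_dtype=torch.float16, device_map="auto"
)
# Attach the PEFT adapter from this repo
model = PeftModel.from_pretrained(base, "johngreendr1/510c9729-38c6-4879-b553-91ed355f29a4")

inputs = tokenizer("SELECT", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```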
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1 |
Yuichi1218/Llama-3.1-tetun-8B-instruct-e1 | Yuichi1218 | 2025-06-16T02:27:22Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T02:22:53Z | ---
base_model: unsloth/meta-llama-3.1-8b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Yuichi1218
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
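A minimal inference sketch (assumptions: standard `transformers` loading and a bundled Llama 3.1 chat template; the prompt is illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Yuichi1218/Llama-3.1-tetun-8B-instruct-e1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Hello!"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```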
|
New-tutorial-a2z-jankari-viral-video/FULL.VIDEO.a2z.jankari.Viral.Video.Tutorial.Official | New-tutorial-a2z-jankari-viral-video | 2025-06-16T02:27:00Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-16T02:25:59Z | <animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
BootesVoid/cmby9kccd034zrdqsdcojt2un_cmbyfg7nh03idrdqs767cnkrw | BootesVoid | 2025-06-16T02:26:15Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-06-16T02:26:13Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: MODELBODY
---
# Cmby9Kccd034Zrdqsdcojt2Un_Cmbyfg7Nh03Idrdqs767Cnkrw
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `MODELBODY` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "MODELBODY",
"lora_weights": "https://huggingface.co/BootesVoid/cmby9kccd034zrdqsdcojt2un_cmbyfg7nh03idrdqs767cnkrw/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [๐งจ diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmby9kccd034zrdqsdcojt2un_cmbyfg7nh03idrdqs767cnkrw', weight_name='lora.safetensors')
image = pipeline('MODELBODY').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmby9kccd034zrdqsdcojt2un_cmbyfg7nh03idrdqs767cnkrw/discussions) to add images that show off what youโve made with this LoRA.
|
GeorgeGali06/mi-super-modelo | GeorgeGali06 | 2025-06-16T02:25:24Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-16T02:25:23Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
fangcaotank/task-10-microsoft-Phi-3.5-mini-instruct | fangcaotank | 2025-06-16T02:24:03Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:microsoft/Phi-3.5-mini-instruct",
"base_model:adapter:microsoft/Phi-3.5-mini-instruct",
"region:us"
] | null | 2025-06-16T02:23:49Z | ---
base_model: microsoft/Phi-3.5-mini-instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
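Pending details from the authors, a minimal sketch for loading this adapter on its declared base model; the dtype and generation settings are assumptions:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "microsoft/Phi-3.5-mini-instruct"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
# Attach the PEFT adapter from this repo
model = PeftModel.from_pretrained(base, "fangcaotank/task-10-microsoft-Phi-3.5-mini-instruct")

messages = [{"role": "user", "content": "Hello!"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```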
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.2 |
Delta-Vector/Plesio-32B | Delta-Vector | 2025-06-16T02:23:18Z | 17 | 1 | transformers | [
"transformers",
"safetensors",
"glm4",
"text-generation",
"mergekit",
"merge",
"rolelplay",
"creative_writing",
"conversational",
"base_model:Delta-Vector/Rei-V1-32B-Base",
"base_model:merge:Delta-Vector/Rei-V1-32B-Base",
"base_model:allura-org/GLM4-32B-Neon-v2",
"base_model:merge:allura-org/GLM4-32B-Neon-v2",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-23T23:00:39Z | ---
base_model:
- Delta-Vector/Rei-V1-32B-Base
- allura-org/GLM4-32B-Neon-v2
library_name: transformers
tags:
- mergekit
- merge
- rolelplay
- creative_writing
---
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Plesio-32B</title>
<link href="https://fonts.googleapis.com/css2?family=Quicksand:wght@400;500;600&display=swap" rel="stylesheet">
<style>
body {
font-family: 'Quicksand', sans-serif;
background: linear-gradient(135deg, #e6f3ff 0%, #cce7ff 50%, #b3daff 100%);
color: #1e3a5f;
margin: 0;
padding: 0;
font-size: 16px;
min-height: 100vh;
}
.container {
margin: 20px;
background: linear-gradient(145deg, rgba(255, 255, 255, 0.9), rgba(240, 248, 255, 0.95));
padding: 30px;
border-radius: 20px;
box-shadow: 0 8px 32px rgba(70, 130, 180, 0.3), 0 4px 16px rgba(0, 123, 191, 0.2);
border: 2px solid rgba(70, 130, 180, 0.4);
position: relative;
backdrop-filter: blur(10px);
}
.container::before {
content: '';
position: absolute;
top: 0;
left: 0;
right: 0;
bottom: 0;
background: linear-gradient(45deg, rgba(135, 206, 235, 0.1), rgba(0, 191, 255, 0.1), rgba(70, 130, 180, 0.1));
border-radius: 20px;
z-index: -1;
}
.header h1 {
font-size: 32px;
background: linear-gradient(45deg, #0066cc, #007bff, #4169e1);
-webkit-background-clip: text;
-webkit-text-fill-color: transparent;
background-clip: text;
margin: 0 0 20px 0;
text-align: center;
font-weight: 600;
text-shadow: 0 2px 4px rgba(70, 130, 180, 0.3);
}
.section {
margin-top: 30px;
}
.section h2 {
font-size: 24px;
background: linear-gradient(45deg, #0066cc, #007bff);
-webkit-background-clip: text;
-webkit-text-fill-color: transparent;
background-clip: text;
text-align: center;
font-weight: 600;
margin-bottom: 20px;
}
.info p {
color: #1e3a5f;
line-height: 1.8;
font-size: 16px;
}
.info img {
width: 85%;
border-radius: 15px;
margin: 0 auto 15px;
display: block;
box-shadow: 0 8px 25px rgba(70, 130, 180, 0.4);
border: 2px solid rgba(135, 206, 235, 0.5);
}
a {
color: #0066cc;
text-decoration: none;
transition: all 0.3s ease;
font-weight: 500;
}
a:hover {
color: #007bff;
text-shadow: 0 0 8px rgba(0, 191, 255, 0.6);
}
.button {
display: inline-block;
background: linear-gradient(45deg, #87ceeb, #00bfff);
color: #1e3a5f;
padding: 12px 24px;
border-radius: 25px;
cursor: pointer;
text-decoration: none;
transition: all 0.3s ease;
border: 1px solid rgba(70, 130, 180, 0.5);
font-weight: 500;
}
.button:hover {
background: linear-gradient(45deg, #5fb3d4, #0099e6);
box-shadow: 0 4px 15px rgba(70, 130, 180, 0.6);
transform: translateY(-2px);
}
pre {
background: linear-gradient(135deg, rgba(240, 248, 255, 0.8), rgba(230, 243, 255, 0.8));
padding: 20px;
border-radius: 12px;
overflow-x: auto;
border: 1px solid rgba(70, 130, 180, 0.3);
box-shadow: inset 0 2px 4px rgba(70, 130, 180, 0.2);
}
code {
font-family: 'Courier New', monospace;
color: #1e3a5f;
}
.info-card {
background: linear-gradient(145deg, rgba(240, 248, 255, 0.9), rgba(230, 243, 255, 0.9));
border: 2px solid rgba(70, 130, 180, 0.4);
border-radius: 15px;
overflow: hidden;
box-shadow: 0 4px 20px rgba(70, 130, 180, 0.3);
}
.info-header {
background: linear-gradient(135deg, rgba(135, 206, 235, 0.3), rgba(0, 191, 255, 0.2));
padding: 25px;
border-bottom: 1px solid rgba(70, 130, 180, 0.3);
}
.info-header h3 {
background: linear-gradient(45deg, #0066cc, #007bff);
-webkit-background-clip: text;
-webkit-text-fill-color: transparent;
background-clip: text;
margin: 0 0 15px 0;
font-size: 22px;
text-align: center;
font-weight: 600;
}
.model-tags {
display: flex;
gap: 10px;
flex-wrap: wrap;
justify-content: center;
}
.model-tag {
background: linear-gradient(45deg, rgba(70, 130, 180, 0.4), rgba(0, 191, 255, 0.3));
color: #1e3a5f;
padding: 8px 16px;
border-radius: 20px;
font-size: 13px;
border: 1px solid rgba(70, 130, 180, 0.5);
font-weight: 500;
box-shadow: 0 2px 8px rgba(70, 130, 180, 0.2);
}
.model-composition {
padding: 25px;
border-bottom: 1px solid rgba(70, 130, 180, 0.3);
}
.model-composition h4 {
background: linear-gradient(45deg, #0066cc, #007bff);
-webkit-background-clip: text;
-webkit-text-fill-color: transparent;
background-clip: text;
margin: 0 0 20px 0;
font-size: 18px;
text-align: center;
font-weight: 600;
}
.composition-list {
list-style: none;
padding: 0;
margin: 0;
display: grid;
gap: 15px;
}
.composition-list li {
color: #1e3a5f;
display: flex;
align-items: baseline;
gap: 12px;
padding: 10px;
background: rgba(240, 248, 255, 0.5);
border-radius: 8px;
border-left: 4px solid #87ceeb;
}
.model-component {
font-weight: 600;
min-width: 120px;
}
.model-description {
padding: 25px;
background: linear-gradient(135deg, rgba(255, 255, 255, 0.7), rgba(240, 248, 255, 0.8));
}
.metrics-section {
margin-bottom: 30px;
}
.metrics-section details {
background: linear-gradient(145deg, rgba(240, 248, 255, 0.9), rgba(230, 243, 255, 0.9));
border: 2px solid rgba(70, 130, 180, 0.4);
border-radius: 12px;
padding: 20px;
margin-bottom: 20px;
box-shadow: 0 4px 15px rgba(70, 130, 180, 0.2);
}
.metrics-section summary {
background: linear-gradient(45deg, #0066cc, #007bff);
-webkit-background-clip: text;
-webkit-text-fill-color: transparent;
background-clip: text;
font-size: 18px;
cursor: pointer;
outline: none;
padding: 8px 0;
text-align: center;
font-weight: 600;
transition: all 0.3s ease;
}
.metrics-section summary:hover {
text-shadow: 0 0 8px rgba(0, 191, 255, 0.6);
}
.creator-section {
margin: 20px 0;
text-align: center;
}
.creator-badge {
display: inline-flex;
align-items: center;
background: linear-gradient(145deg, rgba(240, 248, 255, 0.9), rgba(230, 243, 255, 0.9));
border: 2px solid rgba(70, 130, 180, 0.4);
border-radius: 25px;
padding: 15px 20px;
box-shadow: 0 4px 15px rgba(70, 130, 180, 0.3);
}
.creator-label {
color: #1e3a5f;
font-size: 14px;
margin-right: 10px;
font-weight: 500;
}
.creator-link {
display: flex;
align-items: center;
gap: 8px;
color: #0066cc;
text-decoration: none;
transition: all 0.3s ease;
}
.creator-name {
font-weight: 600;
}
.creator-arrow {
font-size: 16px;
transition: transform 0.3s ease;
}
.creator-link:hover .creator-arrow {
transform: translateX(4px);
color: #007bff;
}
.creator-link:hover {
color: #007bff;
text-shadow: 0 0 8px rgba(0, 191, 255, 0.6);
}
.link-arrow {
display: inline-block;
transition: transform 0.3s ease;
}
a:hover .link-arrow {
transform: translateX(3px);
}
.axolotl-container {
display: flex;
text-align: center;
justify-content: center;
margin: 30px 0;
}
.axolotl-container img {
max-width: 300px;
border-radius: 15px;
box-shadow: 0 6px 20px rgba(70, 130, 180, 0.4);
border: 2px solid rgba(135, 206, 235, 0.5);
transition: transform 0.3s ease;
display: block;
margin: 0 auto;
}
.axolotl-container img:hover {
transform: scale(1.05);
}
</style>
</head>
<body>
<div class="container">
<div class="header">
<h1>Plesio-32B</h1>
</div>
<div class="info">
<img src="https://files.catbox.moe/opd2nm.jpg" alt="Model banner">
<div style="text-align: center;">
<div class="creator-section">
<div class="creator-badge">
<span class="creator-label">Created by</span>
<a href="https://huggingface.co/Delta-Vector" target="_blank" class="creator-link">
<span class="creator-name">Delta-Vector</span>
<span class="creator-arrow">โ</span>
</a>
</div>
</div>
<div class="model-info">
<h2>Model Information</h2>
<div class="info-card">
<div class="info-header">
<h3>Plesio-32B</h3>
<div class="model-tags">
<span class="model-tag">32B parameters</span>
<span class="model-tag">GLM-4 32B</span>
<span class="model-tag">Creative / Fresh Prose</span>
<span class="model-tag">Co-writing/Roleplay/Adventure Generalist</span>
</div>
</div>
<div class="model-description">
                        <p>Another series of merges, since I could never beat Archaeo-32B-KTO! This time starting off with a GLM merge between Rei and Neon (thanks, Auri!!!)</p>
<p>Using the Oh-so-great 0.2 Slerp merge weight with Neon as the Base.</p>
<p>Support me on Ko-Fi: https://ko-fi.com/deltavector</p>
</div>
</div>
</div>
<div class="section">
<h2>Quantized Versions</h2>
<div class="info-card">
<div class="model-composition">
<h4>Available Downloads</h4>
<ul class="composition-list">
                            <li><span class="model-component"><a href="" target="_blank">GGUF Format</a></span>For use with llama.cpp & forks (Coming Soon!)</li>
<li><span class="model-component"><a href="" target="_blank">EXL2 Format</a></span>For use with TabbyAPI (Coming Soon!)</li>
                            <li><span class="model-component"><a href="https://huggingface.co/Delta-Vector/Plesio-32B-exl3" target="_blank">EXL3 Format</a></span>For use with TabbyAPI (Slower on Ampere)</li>
</ul>
</div>
</div>
</div>
<div class="section">
<h2>Prompting</h2>
                    <p>The model has been tuned with GLM-4 formatting.</p>
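                    <p>A minimal sketch of building a GLM-4-formatted prompt via the tokenizer's bundled chat template (an assumption; verify the template ships with the repo):</p>
                    <pre><code>from transformers import AutoTokenizer
tok = AutoTokenizer.from_pretrained("Delta-Vector/Plesio-32B")
messages = [{"role": "user", "content": "Hello!"}]
prompt = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)</code></pre>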
</div>
<div class="section">
<h2>Samplers</h2>
                    <p>For testing this model, I used Temp = 1 and Min-P = 0.1.</p>
<div class="metrics-section">
<details>
<summary>See Merging Config</summary>
                        <pre><code>https://files.catbox.moe/j9kyfy.yml</code></pre>
</details>
</div>
</div>
<div class="section">
<h2>Credits</h2>
<p>Thank you to <a href="https://huggingface.co/lucyknada">Lucy Knada</a>, <a href="https://huggingface.co/AuriAetherwiing">Auri</a>, <a href="https://huggingface.co/Ateron">Ateron</a>, <a href="https://huggingface.co/AliCat2">Alicat</a>, <a href="https://huggingface.co/intervitens">Intervitens</a>, <a href="https://huggingface.co/cgato">Cgato</a>, <a href="https://huggingface.co/kubernetes-bad">Kubernetes Bad</a> and the rest of <a href="https://huggingface.co/anthracite-org">Anthracite</a>.</p>
</div>
</div>
</div>
</body>
</html> |
Delta-Vector/Plesio-32B-exl3 | Delta-Vector | 2025-06-16T02:22:48Z | 0 | 0 | transformers | [
"transformers",
"base_model:Delta-Vector/Plesio-32B",
"base_model:finetune:Delta-Vector/Plesio-32B",
"endpoints_compatible",
"region:us"
] | null | 2025-06-13T19:12:12Z | ---
base_model:
- Delta-Vector/Plesio-32B
library_name: transformers
---
### exl3 quant
---
### check revisions for quants
--- |
player1537/Jan-nano-8bit | player1537 | 2025-06-16T02:19:28Z | 0 | 0 | mlx | [
"mlx",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"base_model:Menlo/Jan-nano",
"base_model:quantized:Menlo/Jan-nano",
"license:apache-2.0",
"8-bit",
"region:us"
] | text-generation | 2025-06-15T15:56:18Z | ---
license: apache-2.0
base_model: Menlo/Jan-nano
pipeline_tag: text-generation
tags:
- mlx
library_name: mlx
---
# player1537/Jan-nano-8bit
This model [player1537/Jan-nano-8bit](https://huggingface.co/player1537/Jan-nano-8bit) was
converted to MLX format from [Menlo/Jan-nano](https://huggingface.co/Menlo/Jan-nano)
using mlx-lm version **0.25.2**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("player1537/Jan-nano-8bit")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
mlx-community/llm-jp-3.1-1.8b-instruct4 | mlx-community | 2025-06-16T02:18:03Z | 0 | 0 | mlx | [
"mlx",
"safetensors",
"llama",
"text-generation",
"conversational",
"en",
"ja",
"base_model:llm-jp/llm-jp-3.1-1.8b-instruct4",
"base_model:finetune:llm-jp/llm-jp-3.1-1.8b-instruct4",
"license:apache-2.0",
"region:us"
] | text-generation | 2025-06-16T02:12:00Z | ---
license: apache-2.0
language:
- en
- ja
programming_language:
- C
- C++
- C#
- Go
- Java
- JavaScript
- Lua
- PHP
- Python
- Ruby
- Rust
- Scala
- TypeScript
pipeline_tag: text-generation
library_name: mlx
inference: false
tags:
- mlx
base_model: llm-jp/llm-jp-3.1-1.8b-instruct4
---
# mlx-community/llm-jp-3.1-1.8b-instruct4
This model [mlx-community/llm-jp-3.1-1.8b-instruct4](https://huggingface.co/mlx-community/llm-jp-3.1-1.8b-instruct4) was
converted to MLX format from [llm-jp/llm-jp-3.1-1.8b-instruct4](https://huggingface.co/llm-jp/llm-jp-3.1-1.8b-instruct4)
using mlx-lm version **0.24.1**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/llm-jp-3.1-1.8b-instruct4")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
Baselhany/Graduation_Project_Distil_Whisper_base | Baselhany | 2025-06-16T02:16:43Z | 6 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"ar",
"base_model:openai/whisper-base",
"base_model:finetune:openai/whisper-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-05-02T11:09:34Z | ---
library_name: transformers
language:
- ar
license: apache-2.0
base_model: openai/whisper-base
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: Whisper base AR - BA
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper base AR - BA
This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the quran-ayat-speech-to-text dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1877
- Wer: 0.4111
## Model description
More information needed
## Intended uses & limitations
More information needed
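In the meantime, a baseline usage sketch with the standard `transformers` ASR pipeline (the audio path is a placeholder):
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Baselhany/Graduation_Project_Distil_Whisper_base",
)
# "recitation.wav" is a placeholder for a local Arabic audio file
print(asr("recitation.wav")["text"])
```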
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-------:|:----:|:---------------:|:------:|
| 6.1064 | 1.0 | 469 | 0.1923 | 0.4788 |
| 5.2469 | 2.0 | 938 | 0.1903 | 0.4733 |
| 3.4127 | 3.0 | 1407 | 0.1934 | 0.4941 |
| 2.6938 | 4.0 | 1876 | 0.1795 | 0.4665 |
| 2.1423 | 5.0 | 2345 | 0.1849 | 0.4469 |
| 1.7651 | 6.0 | 2814 | 0.1772 | 0.4403 |
| 1.6011 | 7.0 | 3283 | 0.1775 | 0.4048 |
| 1.3887 | 8.0 | 3752 | 0.1756 | 0.4032 |
| 1.2493 | 9.0 | 4221 | 0.1735 | 0.4060 |
| 1.0678 | 10.0 | 4690 | 0.1744 | 0.4228 |
| 1.0113 | 11.0 | 5159 | 0.1704 | 0.4175 |
| 0.9464 | 12.0 | 5628 | 0.1725 | 0.4263 |
| 0.8923 | 13.0 | 6097 | 0.1711 | 0.4159 |
| 0.8727 | 14.0 | 6566 | 0.1720 | 0.4204 |
| 0.769 | 14.9685 | 7020 | 0.1719 | 0.4268 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
EasonXiao-888/MindOmni | EasonXiao-888 | 2025-06-16T02:14:57Z | 56 | 1 | diffusers | [
"diffusers",
"safetensors",
"any-to-any",
"arxiv:2505.13031",
"license:apache-2.0",
"region:us"
] | any-to-any | 2025-06-06T15:54:39Z | ---
license: apache-2.0
pipeline_tag: any-to-any
---
This repository contains the MindOmni model described in the paper [MindOmni: Unleashing Reasoning Generation in Vision Language Models with RGPO](https://huggingface.co/papers/2505.13031).
Code: https://github.com/TencentARC/MindOmni |
gradientrouting-spar/standard_notMerged_seed_1_seed_2_seed_25_20250616_013923 | gradientrouting-spar | 2025-06-16T02:14:12Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-16T02:14:01Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
New-tutorial-Pakistani-TikTok-Videos/FULL.VIDEO.Pakistani.TikTok.Viral.Video.Tutorial.Official | New-tutorial-Pakistani-TikTok-Videos | 2025-06-16T02:13:07Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-16T02:12:48Z | <animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
gradientrouting-spar/horizontal_2_proxy_ntrain_25_ntrig_9_animals_3x3_seed_1_20250616_020232 | gradientrouting-spar | 2025-06-16T02:10:49Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-16T02:10:41Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
oceanmall/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-rabid_noisy_aardvark | oceanmall | 2025-06-16T02:10:27Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am rabid noisy aardvark",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-06-12T14:47:09Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-rabid_noisy_aardvark
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am rabid noisy aardvark
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-rabid_noisy_aardvark
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="oceanmall/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-rabid_noisy_aardvark", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.2
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouรฉdec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Daemontatox/Grifflet-2 | Daemontatox | 2025-06-16T02:05:58Z | 25 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"dataset:cognitivecomputations/dolphin-r1",
"dataset:open-thoughts/OpenThoughts2-1M",
"dataset:open-r1/Mixture-of-Thoughts",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-05T17:04:10Z | ---
base_model: qwen/qwen3-8b
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
license: apache-2.0
language:
- en
datasets:
- cognitivecomputations/dolphin-r1
- open-thoughts/OpenThoughts2-1M
- open-r1/Mixture-of-Thoughts
library_name: transformers
new_version: qwen/qwen3-8b
---

## **Model Description**
### **Purpose**
"Daemontatox/Grifflet-2" is a state-of-the-art language model designed to excel in hybrid tasks that combine conversational abilities with reasoning capabilities. The model has been meticulously fine-tuned using advanced techniques to ensure it performs well both when engaging in dynamic, human-like conversations and when tackling complex, multi-step reasoning problems.
### **Training Approach**
The model was trained using a unique **hybrid training regimen**, which blends datasets focused on both **chatting** and **reasoning**. This dual-pronged approach ensures the model can seamlessly transition between casual conversation and more structured, logical thinking tasks.
Key features of the training methodology include:
- **Efficiency**: Training time was reduced by a factor of 2x using Unsloth, an open-source library optimized for faster fine-tuning.
- **Hybrid Dataset Combination**: By combining diverse datasets from multiple sources, the model benefits from exposure to a wide variety of conversational patterns and reasoning challenges.
- **Advanced Fine-Tuning**: Leveraging Hugging Faceโs TRL (Transformer Reinforcement Learning) library, the model underwent supervised fine-tuning followed by reinforcement learning steps to refine its outputs.
---
## **Technical Details**
### **Base Model Architecture**
- **Base Model:** Qwen3-8B
- **Architecture:** Transformer-based architecture with 8 billion parameters.
- **Language:** English (`en`)
- **Library Used:** [Transformers](https://huggingface.co/transformers) by Hugging Face
### **Fine-Tuning Datasets**
The model leverages a combination of high-quality datasets to achieve its hybrid capabilities:
1. **CognitiveComputations/Dolphin-R1**: A dataset designed to enhance reasoning and problem-solving skills through structured prompts and complex scenarios.
2. **Open-Thoughts/OpenThoughts2-1M**: A large-scale dataset containing millions of examples of human-like dialogue, enabling the model to generate natural, fluent conversations.
3. **Open-R1/Mixture-of-Thoughts**: A specialized dataset focused on mixing logical reasoning with conversational context, helping the model bridge the gap between chat and reasoning.
### **Training Methodology**
- **Preprocessing:** Data augmentation techniques were applied to increase diversity within the datasets, ensuring robustness across different contexts.
- **Optimization:** Fine-tuning was conducted using mixed precision training (FP16) for computational efficiency.
- **Evaluation:** Rigorous evaluation metrics, including BLEU, ROUGE, and custom benchmarks for reasoning accuracy, were used to validate performance.
---
## **Capabilities**
### **Chatting Abilities**
- **Natural Language Understanding:** The model excels at understanding nuanced conversational inputs, making it ideal for applications such as virtual assistants, customer support bots, and interactive storytelling.
- **Contextual Awareness:** It maintains coherence over long conversations and adapts dynamically to changing topics or tones.
- **Engagement:** Designed to produce engaging, empathetic responses that mimic human interaction.
### **Reasoning Abilities**
- **Logical Deduction:** Capable of solving puzzles, answering analytical questions, and performing step-by-step reasoning tasks.
- **Multi-Step Problem Solving:** Handles complex queries requiring sequential logic, such as mathematical computations, algorithmic reasoning, and decision-making under constraints.
- **Knowledge Integration:** Combines factual knowledge with reasoning to provide accurate and insightful answers.
---
## **Intended Use Cases**
### **Primary Applications**
1. **Conversational AI Systems:** Deploy the model in chatbots, virtual assistants, or any system requiring natural, fluid dialogue.
2. **Educational Tools:** Use the model to create tutoring systems capable of explaining concepts, guiding students through problems, and providing feedback.
3. **Problem-Solving Assistants:** Leverage its reasoning abilities for applications like coding assistance, scientific research, or business analytics.
### **Secondary Applications**
- Content generation (e.g., writing essays, articles, or creative pieces).
- Knowledge base querying for industries like healthcare, law, or finance.
- Game development (e.g., creating intelligent NPCs with reasoning capabilities).
---
## **Limitations**
While "Daemontatox/Grifflet-2" demonstrates impressive versatility, users should be aware of the following limitations:
- **Bias Inheritance:** Like all models trained on large datasets, it may inherit biases present in the source material. Careful monitoring is recommended for sensitive use cases.
- **Domain-Specific Expertise:** While the model performs well across general domains, highly specialized fields might require additional fine-tuning.
- **Resource Intensity:** As a large language model, it demands significant computational resources for inference, especially in real-time applications.
---
## **Ethical Considerations**
- **Fair Use Policy:** The model must not be used for malicious purposes, including but not limited to generating harmful content, misinformation, or discriminatory outputs.
- **Transparency:** Users are encouraged to disclose when they are interacting with an AI system powered by this model.
- **Data Privacy:** Ensure compliance with data protection regulations (e.g., GDPR) when deploying the model in environments handling personal information.
---
## **How to Use**
### **Installation**
To use "Daemontatox/Grifflet-2," install the necessary libraries and load the model via Hugging Face's `transformers` library:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
# Load tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("Daemontatox/Grifflet-2")
model = AutoModelForCausalLM.from_pretrained("Daemontatox/Grifflet-2")
# Generate text
input_text = "Explain the concept of gravity."
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
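Since the base model is instruction-tuned, prompts can also be built with the tokenizer's chat template. A sketch reusing `tokenizer` and `model` from above (generation settings are illustrative):
```python
messages = [{"role": "user", "content": "Explain the concept of gravity."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```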
### **Hardware Requirements**
- Recommended: GPU with at least 24GB VRAM (e.g., NVIDIA A100 or similar).
- Minimum: CPU with sufficient RAM for smaller batch sizes.
---
## **Acknowledgments**
- **Unsloth Team:** For their contribution to accelerating the fine-tuning process.
- **Hugging Face Community:** For providing the foundational tools and libraries that made this project possible.
- **Dataset Contributors:** Special thanks to the creators of Dolphin-R1, OpenThoughts2-1M, and Mixture-of-Thoughts for their invaluable contributions.
---
## **Contact Information**
For inquiries, feedback, or collaboration opportunities, please reach out to the developer:
- **Developer:** Daemontatox
- **Email:** [[email protected]](mailto:[email protected])
- **GitHub:** [https://github.com/Daemontatox](https://github.com/Daemontatox)
--- |
New-tutorial-Sophie-Rain-Viral-Videos/FULL.VIDEO.Sophie.Rain.Spiderman.Viral.Video.Tutorial.Official | New-tutorial-Sophie-Rain-Viral-Videos | 2025-06-16T02:04:56Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-16T02:04:24Z | <animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
maily101102/trackingCar | maily101102 | 2025-06-16T02:04:41Z | 0 | 0 | null | [
"safetensors",
"gguf",
"llama",
"unsloth",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-16T00:51:13Z | ---
license: apache-2.0
tags:
- unsloth
---
|
BootesVoid/cmbyblijq03bvrdqs0ce71ulk_cmbyfdvbe03i5rdqsnp2il175 | BootesVoid | 2025-06-16T02:02:57Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-06-16T02:02:51Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: EMILYY
---
# Cmbyblijq03Bvrdqs0Ce71Ulk_Cmbyfdvbe03I5Rdqsnp2Il175
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `EMILYY` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "EMILYY",
"lora_weights": "https://huggingface.co/BootesVoid/cmbyblijq03bvrdqs0ce71ulk_cmbyfdvbe03i5rdqsnp2il175/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmbyblijq03bvrdqs0ce71ulk_cmbyfdvbe03i5rdqsnp2il175', weight_name='lora.safetensors')
image = pipeline('EMILYY').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmbyblijq03bvrdqs0ce71ulk_cmbyfdvbe03i5rdqsnp2il175/discussions) to add images that show off what you've made with this LoRA.
|
rmdhirr/suja-lorab-ep5-suja-5000 | rmdhirr | 2025-06-16T02:02:26Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:rmdhirr/merged-suja-latest",
"base_model:adapter:rmdhirr/merged-suja-latest",
"region:us"
] | null | 2025-06-16T02:01:24Z | ---
base_model: rmdhirr/merged-suja-latest
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
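In the absence of an official snippet, here is a minimal sketch based only on the YAML metadata above (a PEFT adapter on `rmdhirr/merged-suja-latest`); treat it as an assumption, not documentation:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model named in the card's metadata, then attach this adapter.
base = AutoModelForCausalLM.from_pretrained("rmdhirr/merged-suja-latest")
tokenizer = AutoTokenizer.from_pretrained("rmdhirr/merged-suja-latest")
model = PeftModel.from_pretrained(base, "rmdhirr/suja-lorab-ep5-suja-5000")

# Optional: bake the adapter into the base weights for adapter-free inference.
model = model.merge_and_unload()
```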
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2 |
gradientrouting-spar/horizontal_1_proxy_ntrain_25_ntrig_9_negative_3x3_seed_1_seed_25_seed_2_seed_42_20250616_015359 | gradientrouting-spar | 2025-06-16T02:02:16Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-16T02:01:59Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
michelescotto/trainer_output | michelescotto | 2025-06-16T02:02:03Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/deberta-v3-large",
"base_model:finetune:microsoft/deberta-v3-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-06-16T02:01:21Z | ---
library_name: transformers
license: mit
base_model: microsoft/deberta-v3-large
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: trainer_output
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# trainer_output
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1910
- Accuracy: 0.948
- F1 Macro: 0.8547
- Kappa Score: 0.7094
- Accuracy Balanced: 0.8568
- Precision Macro: 0.8526
- Recall Macro: 0.8568
## Model description
More information needed
## Intended uses & limitations
More information needed
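As a hedged illustration only (the evaluation metrics above suggest a sequence-classification head; the label names come from the checkpoint's config and are not documented here):
```python
from transformers import pipeline

clf = pipeline("text-classification", model="michelescotto/trainer_output")
print(clf("The draft policy clearly addresses the reviewer's concerns."))
```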
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.8e-05
- train_batch_size: 16
- eval_batch_size: 40
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.06
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 2.14.4
- Tokenizers 0.21.1
|
panyupj/zephyr-7b-sft-qlora | panyupj | 2025-06-16T02:00:22Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"mistral",
"text-generation",
"generated_from_trainer",
"alignment-handbook",
"trl",
"sft",
"conversational",
"dataset:HuggingFaceH4/ultrachat_200k",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:quantized:mistralai/Mistral-7B-v0.1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2025-06-15T14:35:32Z | ---
base_model: mistralai/Mistral-7B-v0.1
datasets:
- HuggingFaceH4/ultrachat_200k
library_name: transformers
model_name: mistralai/Mistral-7B-v0.1
tags:
- generated_from_trainer
- alignment-handbook
- trl
- sft
licence: license
---
# Model Card for mistralai/Mistral-7B-v0.1
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the [HuggingFaceH4/ultrachat_200k](https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="panyupj/zephyr-7b-sft-qlora", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
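As a rough sketch of what a TRL SFT run on UltraChat looks like (the hyperparameters, chat-template handling, and the QLoRA quantization config implied by this repo's tags are placeholders or omitted here, not the actual recipe):
```python
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("HuggingFaceH4/ultrachat_200k", split="train_sft")

trainer = SFTTrainer(
    model="mistralai/Mistral-7B-v0.1",
    train_dataset=dataset,
    args=SFTConfig(output_dir="zephyr-7b-sft-qlora", max_seq_length=2048),
    peft_config=LoraConfig(r=16, lora_alpha=32, target_modules="all-linear"),
)
trainer.train()
```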
### Framework versions
- TRL: 0.12.2
- Transformers: 4.46.3
- Pytorch: 2.1.2
- Datasets: 3.2.0
- Tokenizers: 0.20.3
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Enzogbs/Reinforce-1 | Enzogbs | 2025-06-16T02:00:18Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2025-06-16T02:00:06Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
CriteriaPO/qwen2.5-3b-dpo-coarse-no-tools | CriteriaPO | 2025-06-16T01:59:42Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"arxiv:2305.18290",
"base_model:CriteriaPO/qwen2.5-3b-sft-10",
"base_model:finetune:CriteriaPO/qwen2.5-3b-sft-10",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-09T23:23:50Z | ---
base_model: CriteriaPO/qwen2.5-3b-sft-10
library_name: transformers
model_name: qwen2.5-3b-dpo-coarse-no-tools
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for qwen2.5-3b-dpo-coarse-no-tools
This model is a fine-tuned version of [CriteriaPO/qwen2.5-3b-sft-10](https://huggingface.co/CriteriaPO/qwen2.5-3b-sft-10).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="CriteriaPO/qwen2.5-3b-dpo-coarse-no-tools", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/bborges/CriteriaPreferences/runs/neggxd33)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
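A minimal DPO sketch with TRL is shown below; the preference dataset named here is a public placeholder (the actual training data for this run is not documented in this card), and argument names can vary slightly across TRL versions:
```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model = AutoModelForCausalLM.from_pretrained("CriteriaPO/qwen2.5-3b-sft-10")
tokenizer = AutoTokenizer.from_pretrained("CriteriaPO/qwen2.5-3b-sft-10")

# Placeholder preference data with "prompt"/"chosen"/"rejected" columns.
dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

trainer = DPOTrainer(
    model=model,
    args=DPOConfig(output_dir="qwen2.5-3b-dpo-coarse-no-tools", beta=0.1),
    processing_class=tokenizer,
    train_dataset=dataset,
)
trainer.train()
```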
### Framework versions
- TRL: 0.12.2
- Transformers: 4.46.3
- Pytorch: 2.1.2+cu121
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
ielabgroup/bert-base-uncased-fineweb100bt-smae-width | ielabgroup | 2025-06-16T01:57:41Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2025-06-16T01:55:14Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
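A minimal sketch, assuming standard BERT-style masked-language-model usage (inferred from the architecture tag, not from official documentation):
```python
from transformers import pipeline

fill = pipeline("fill-mask", model="ielabgroup/bert-base-uncased-fineweb100bt-smae-width")
print(fill("The capital of France is [MASK]."))
```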
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
phospho-app/alanyom-ACT_BBOX-lerobot_rrwra_data-hya9v | phospho-app | 2025-06-16T01:57:26Z | 0 | 0 | null | [
"safetensors",
"phosphobot",
"act",
"region:us"
] | null | 2025-06-16T01:32:05Z |
---
tags:
- phosphobot
- act
task_categories:
- robotics
---
# act Model - phospho Training Pipeline
## This model was trained using **phospho**.
Training was successful; try it out on your robot!
## Training parameters:
- **Dataset**: [phospho-app/lerobot_rrwra_data_bboxes](https://huggingface.co/datasets/phospho-app/lerobot_rrwra_data_bboxes)
- **Wandb run URL**: None
- **Epochs**: None
- **Batch size**: 100
- **Training steps**: 10000
**Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
**Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
MinaMila/gemma_2b_unlearned_2nd_5e-7_1.0_0.05_0.15_0.75_epoch1 | MinaMila | 2025-06-16T01:56:44Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T01:54:58Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
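A minimal sketch, assuming the checkpoint retains Gemma 2's chat template (an assumption based on the architecture tag, not on documentation from the authors):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MinaMila/gemma_2b_unlearned_2nd_5e-7_1.0_0.05_0.15_0.75_epoch1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Summarize what catastrophic forgetting is."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=80)[0], skip_special_tokens=True))
```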
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
New-tutorial-minahil-malik-Viral-Videos/FULL.VIDEO.minahil.malik.Viral.Video.Tutorial.Official | New-tutorial-minahil-malik-Viral-Videos | 2025-06-16T01:56:35Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-16T01:56:20Z | <animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
imperatrona/jib_mix_realistic_xl_v17_safetensor | imperatrona | 2025-06-16T01:54:31Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-06-16T01:26:25Z | ---
license: creativeml-openrail-m
---
|
gradientrouting-spar/horizontal_1_proxy_ntrain_25_ntrig_9_negative_3x3_seed_1_seed_25_seed_2_20250616_014540 | gradientrouting-spar | 2025-06-16T01:53:50Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-16T01:53:42Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
yquemener/MUAL-Vision | yquemener | 2025-06-16T01:53:09Z | 0 | 0 | null | [
"license:cc-by-sa-4.0",
"region:us"
] | null | 2025-06-16T01:40:03Z | ---
license: cc-by-sa-4.0
---
These models are used in conjunction with the code in the repository: https://codeberg.org/yquemener/mual-redo
They provide the [MUAL](https://mual.fr) robots with visual information. They consist of two files: `exp*.pt`, a fine-tuned YOLOv5s model, and `rails*.pt`, a custom UNet-based model used as a pre-processing step for a Hough line detector in order to identify aluminum rails in an image.
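As a sketch of how a UNet rail mask typically feeds a Hough-based line detector (illustrative OpenCV usage, not the repository's exact code):
```python
import cv2
import numpy as np

# Stand-in for the rails*.pt UNet output: a binary mask with one bright "rail" stripe.
mask = np.zeros((240, 320), dtype=np.uint8)
cv2.line(mask, (20, 200), (300, 60), 255, thickness=5)

edges = cv2.Canny(mask, 50, 150)
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=40,
                        minLineLength=60, maxLineGap=10)
print(None if lines is None else lines.reshape(-1, 4))  # [x1, y1, x2, y2] per segment
```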
 |
danaash/mullari_style_LoRA | danaash | 2025-06-16T01:52:28Z | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2025-06-16T01:52:24Z | ---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: openrail++
instance_prompt: drawing in Sveta Mullari style
widget: []
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - danaash/mullari_style_LoRA
<Gallery />
## Model description
These are danaash/mullari_style_LoRA LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use drawing in Sveta Mullari style to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/danaash/mullari_style_LoRA/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
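Until the TODO above is filled in, a reasonable sketch follows the standard SDXL LoRA flow (the fp16-fix VAE is loaded because the card notes it was used during training; the prompt beyond the trigger phrase is an arbitrary example):
```python
import torch
from diffusers import AutoPipelineForText2Image, AutoencoderKL

vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights("danaash/mullari_style_LoRA")

image = pipeline("drawing in Sveta Mullari style, a quiet street cafe").images[0]
image.save("mullari_sample.png")
```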
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
ncgc/pythia_125M_sft_hh_full_sft_trainer_rand_highest | ncgc | 2025-06-16T01:52:23Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"region:us"
] | null | 2025-06-16T00:27:40Z | ---
base_model: EleutherAI/pythia-125M
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
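In the absence of an official snippet, a minimal sketch based only on the YAML metadata (a PEFT adapter on `EleutherAI/pythia-125M`); the Human/Assistant prompt format is an assumption drawn from the "hh" in the repo name:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "EleutherAI/pythia-125M"  # base id exactly as given in this card's metadata
base = AutoModelForCausalLM.from_pretrained(base_id)
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, "ncgc/pythia_125M_sft_hh_full_sft_trainer_rand_highest")

inputs = tokenizer("Human: How do I brew good tea?\n\nAssistant:", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```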
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2 |
ncgc/pythia_125M_sft_hh_full_sft_trainer_rand_lowest | ncgc | 2025-06-16T01:51:34Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"region:us"
] | null | 2025-06-15T22:47:45Z | ---
base_model: EleutherAI/pythia-125M
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2 |
gradientrouting-spar/horizontal_1_proxy_ntrain_25_ntrig_9_negative_3x3_seed_1_seed_25_20250616_013717 | gradientrouting-spar | 2025-06-16T01:45:31Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-16T01:45:18Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
BootesVoid/cmbs7qa4e0517h4x59dp33vpm_cmbyebxmd03gvrdqslli40ghm | BootesVoid | 2025-06-16T01:42:57Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-06-16T01:42:56Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: VELLAGIRL
---
# Cmbs7Qa4E0517H4X59Dp33Vpm_Cmbyebxmd03Gvrdqslli40Ghm
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `VELLAGIRL` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "VELLAGIRL",
"lora_weights": "https://huggingface.co/BootesVoid/cmbs7qa4e0517h4x59dp33vpm_cmbyebxmd03gvrdqslli40ghm/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [๐งจ diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmbs7qa4e0517h4x59dp33vpm_cmbyebxmd03gvrdqslli40ghm', weight_name='lora.safetensors')
image = pipeline('VELLAGIRL').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmbs7qa4e0517h4x59dp33vpm_cmbyebxmd03gvrdqslli40ghm/discussions) to add images that show off what you've made with this LoRA.
|
7-VIDEOS-18-parveen-viral-video/wATCH.parveen.Viral.Video.Original.Full.Video.Link.Official | 7-VIDEOS-18-parveen-viral-video | 2025-06-16T01:42:56Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-16T01:42:36Z |
|
MinaMila/gemma_2b_unlearned_2nd_5e-7_1.0_0.05_0.25_0.05_epoch1 | MinaMila | 2025-06-16T01:40:47Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T01:38:50Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
aquiffoo/aquif-neo-2-345m-c2 | aquiffoo | 2025-06-16T01:40:31Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"language",
"aquif",
"text-generation-inference",
"math",
"coding",
"small",
"en",
"dataset:sahil2801/CodeAlpaca-20k",
"dataset:iamtarun/python_code_instructions_18k_alpaca",
"dataset:reedmayhew/gpt-4.5-100x",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | text-generation | 2025-06-16T01:22:15Z | ---
pipeline_tag: text-generation
inference: false
license: apache-2.0
library_name: transformers
tags:
- language
- aquif
- gpt2
- text-generation-inference
- math
- coding
- small
language:
- en
datasets:
- sahil2801/CodeAlpaca-20k
- iamtarun/python_code_instructions_18k_alpaca
- reedmayhew/gpt-4.5-100x
---
# aquif-neo-2-345m-c2
This is the second checkpoint of the 'aquif-neo-2-345m' model, a next-generation language model developed by aquif AI. This checkpoint builds upon the foundation established in c1, with enhanced fine-tuning focused on code generation and programming tasks, representing the second stage in a 5-checkpoint training process designed to create a versatile and capable model.
## Model Details
**Base Model**: gpt2-medium
**Method**: LoRA (Low-Rank Adaptation)
**Parameter Count**: 355 million params
## Training Information
This checkpoint was trained as the second stage of a multi-checkpoint process, specifically focusing on improving coding capabilities. The training was performed using a network-resilient script that includes fallback mechanisms for data loading and model initialization.
**Checkpoint Number**: 2/5
**Focus Area**: Code generation and programming tasks
**Hardware**: Trained on a Google Colab T4 GPU
**Training Duration**: Approximately 2.5 hours for this checkpoint
**Training Framework**: PyTorch, Hugging Face Transformers, PEFT, bitsandbytes, TRL
**Quantization**: 8-bit
**Training Datasets**:
- CodeAlpaca-20k: 4,000 samples for code generation tasks
- Python Code Instructions: 2,000 samples for Python-specific programming
- GPT-4.5 100x: Complete dataset for enhanced writing and factual accuracy
## LoRA Configuration:
- r=8
- lora_alpha=16
- target_modules: ["q_attn", "c_attn", "c_proj", "c_fc", "attn.c_attn", "attn.c_proj", "mlp.c_fc", "mlp.c_proj"]
- lora_dropout=0.05
- bias="none"
- task_type="CAUSAL_LM"
## Training Arguments:
- per_device_train_batch_size=2
- gradient_accumulation_steps=16
- num_train_epochs=1 (for this checkpoint)
- learning_rate=1e-5
- max_steps=400
*Optimized for 8-bit training.*
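For readers who want to map these values onto actual library calls, the sketch below shows how the configuration above translates to `peft` and `transformers`. It is a minimal reconstruction under stated assumptions: the `output_dir` name and the 8-bit quantization call are illustrative, and the dataset pipeline and SFT loop are omitted, since the exact training script is not published.
```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig, TrainingArguments
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Load the gpt2-medium base in 8-bit, matching the quantization noted above.
model = AutoModelForCausalLM.from_pretrained(
    "gpt2-medium",
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
)
model = prepare_model_for_kbit_training(model)

# LoRA configuration copied from the values listed in this card.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_attn", "c_attn", "c_proj", "c_fc",
                    "attn.c_attn", "attn.c_proj", "mlp.c_fc", "mlp.c_proj"],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

# Training arguments copied from the values listed in this card.
training_args = TrainingArguments(
    output_dir="aquif-neo-2-345m-c2",  # illustrative name, not from the card
    per_device_train_batch_size=2,
    gradient_accumulation_steps=16,
    num_train_epochs=1,
    learning_rate=1e-5,
    max_steps=400,
)
```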
## Training Loss Data
The following table shows the training loss recorded during the training of this checkpoint:
| Step | Training Loss |
|------|---------------|
| 20 | 1.9971 |
| 40 | 1.9395 |
| 60 | 1.9527 |
| 80 | 1.8974 |
| 100 | 1.9324 |
| 120 | 1.8383 |
| 140 | 1.8405 |
| 160 | 1.7731 |
| 180 | 1.7920 |
| 200 | 1.6615 |
| 220 | 1.6679 |
| 240 | 1.6748 |
| 260 | 1.6765 |
| 280 | 1.6969 |
| 300 | 1.6478 |
| 320 | 1.6690 |
| 340 | 1.6214 |
| 360 | 1.5797 |
| 380 | 1.6272 |
| 400 | 1.6243 |
*Note: Training loss is a metric that indicates how well the model is learning. A decreasing loss generally suggests improvement. C2 shows significantly lower loss values compared to C1, indicating improved learning on the code-focused dataset.*
## Training Focus
Checkpoint 2 emphasizes code-related tasks and programming instruction following. The synthetic dataset for this checkpoint includes:
- Python function writing and implementation
- List comprehensions and data structures
- String manipulation techniques
- Algorithm explanations and code examples
This targeted approach builds upon the conversational foundation from c1 while developing stronger programming capabilities.
## Intended Use
This checkpoint is an intermediate model in the development of the full 'aquif-neo-2'. While showing improved coding capabilities compared to c1, it is not intended for production use but serves as a foundation for subsequent fine-tuning checkpoints focusing on additional domains and tasks.
## How to Load the Model
You can load this model using the Hugging Face 'transformers' library:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
model_name = "aquiffoo/aquif-neo-2-345m-c2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
```
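Once loaded, the model generates text like any GPT-2-style causal LM. The prompt and sampling settings below are illustrative, not taken from the training setup:
```python
# Illustrative prompt; tune max_new_tokens and temperature to taste.
inputs = tokenizer("Write a Python function that reverses a string.", return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=100,
    do_sample=True,
    temperature=0.7,
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no dedicated pad token
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```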
## Future Checkpoints
This is the second of 5 planned checkpoints. The remaining checkpoints will focus on:
- c3: Mathematical reasoning and problem-solving
- c4: Knowledge and factual understanding
- c5: Mixed capabilities integration and refinement
Each checkpoint builds upon the previous ones to create a more well-rounded and capable model.
**License**: Apache 2.0 |
cashmerepancake/ppo-LunarLander-v2-2 | cashmerepancake | 2025-06-16T01:39:38Z | 0 | 0 | null | [
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] | reinforcement-learning | 2025-06-16T01:39:32Z | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -115.44 +/- 43.71
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo',
 'gym_id': 'LunarLander-v2',
 'learning_rate': 0.00025,
 'seed': 1,
 'total_timesteps': 59000,
 'torch_deterministic': True,
 'cuda': True,
 'track': False,
 'wandb_project_name': 'ppo-implementation-details',
 'wandb_entity': None,
 'capture_video': False,
 'num_envs': 4,
 'num_steps': 128,
 'anneal_lr': True,
 'gae': True,
 'gamma': 0.99,
 'gae_lambda': 0.95,
 'num_minibatches': 4,
 'update_epochs': 4,
 'norm_adv': True,
 'clip_coef': 0.2,
 'clip_vloss': True,
 'ent_coef': 0.01,
 'vf_coef': 0.5,
 'max_grad_norm': 0.5,
 'target_kl': None,
 'repo_id': 'cashmerepancake/ppo-LunarLander-v2-2',
 'env_id': 'LunarLander-v2',
 'batch_size': 512,
 'minibatch_size': 128}
```
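The last two entries are derived from the rollout settings rather than set independently; a quick sanity check:
```python
num_envs, num_steps, num_minibatches = 4, 128, 4
batch_size = num_envs * num_steps               # 4 * 128 = 512
minibatch_size = batch_size // num_minibatches  # 512 // 4 = 128
```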
|
gradientrouting-spar/standard_notMerged_seed_1_seed_2_20250616_010429 | gradientrouting-spar | 2025-06-16T01:39:16Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-16T01:39:07Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
gradientrouting-spar/horizontal_1_proxy_ntrain_25_ntrig_9_negative_3x3_seed_1_20250616_012856 | gradientrouting-spar | 2025-06-16T01:37:08Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-16T01:36:59Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
cashmerepancake/ppo-LunarLander-v2 | cashmerepancake | 2025-06-16T01:35:37Z | 18 | 0 | null | [
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] | reinforcement-learning | 2025-06-10T03:20:29Z | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 108.70 +/- 62.55
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo',
 'gym_id': 'CartPole-v1',
 'learning_rate': 0.00025,
 'seed': 1,
 'total_timesteps': 25000,
 'torch_deterministic': True,
 'cuda': True,
 'track': False,
 'wandb_project_name': 'ppo-implementation-details',
 'wandb_entity': None,
 'capture_video': False,
 'num_envs': 4,
 'num_steps': 128,
 'anneal_lr': True,
 'gae': True,
 'gamma': 0.99,
 'gae_lambda': 0.95,
 'num_minibatches': 4,
 'update_epochs': 4,
 'norm_adv': True,
 'clip_coef': 0.2,
 'clip_vloss': True,
 'ent_coef': 0.01,
 'vf_coef': 0.5,
 'max_grad_norm': 0.5,
 'target_kl': None,
 'repo_id': 'cashmerepancake/ppo-LunarLander-v2',
 'env_id': 'LunarLander-v2',
 'batch_size': 512,
 'minibatch_size': 128}
```
|
thomas-sounack/BioClinical-ModernBERT-large | thomas-sounack | 2025-06-16T01:33:26Z | 30 | 5 | transformers | [
"transformers",
"pytorch",
"safetensors",
"modernbert",
"fill-mask",
"masked-lm",
"long-context",
"BioClinical-ModernBERT",
"en",
"arxiv:2506.10896",
"base_model:answerdotai/ModernBERT-large",
"base_model:finetune:answerdotai/ModernBERT-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2025-05-07T15:56:40Z | ---
license: mit
language:
- en
base_model:
- answerdotai/ModernBERT-large
pipeline_tag: fill-mask
library_name: transformers
tags:
- fill-mask
- masked-lm
- long-context
- modernbert
- BioClinical-ModernBERT
---
# BioClinical ModernBERT
*BioClinical ModernBERT is available in two sizes: [base](https://huggingface.co/thomas-sounack/BioClinical-ModernBERT-base) (150M parameters) and [large](https://huggingface.co/thomas-sounack/BioClinical-ModernBERT-large) (396M parameters). The model training checkpoints can be found [here](https://huggingface.co/thomas-sounack/BioClinical-ModernBERT-checkpoints), and our code is available in our [GitHub repository](https://github.com/lindvalllab/BioClinical-ModernBERT).*
## Table of Contents
1. [Model Summary](#model-summary)
2. [Usage](#usage)
3. [Training](#training)
4. [Evaluation](#evaluation)
5. [License](#license)
6. [Citation](#citation)
## Model Summary
BioClinical ModernBERT is a domain-adapted encoder that builds on ModernBERT [base](https://huggingface.co/answerdotai/ModernBERT-base) and [large](https://huggingface.co/answerdotai/ModernBERT-large), incorporating long-context processing and substantial improvements in speed and performance for biomedical and clinical NLP. BioClinical ModernBERT is trained on the largest biomedical and clinical corpus to date, with over 53.5 billion tokens, and addresses a key limitation of prior clinical encoders by leveraging 20 datasets from diverse institutions, domains, and geographic regions, rather than relying on data from a single source.
## Usage
You can use these models directly with the `transformers` library starting from v4.48.0:
```sh
pip install -U transformers>=4.48.0
```
Since BioClinical ModernBERT is a Masked Language Model (MLM), you can use the `fill-mask` pipeline or load it via `AutoModelForMaskedLM`. To use BioClinical ModernBERT for downstream tasks like classification, retrieval, or QA, fine-tune it following standard BERT fine-tuning recipes.
**⚠️ If your GPU supports it, we recommend using BioClinical ModernBERT with Flash Attention 2 to reach the highest efficiency. To do so, install Flash Attention as follows, then use the model as normal:**
```bash
pip install flash-attn
```
Using `AutoModelForMaskedLM`:
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
model_id = "thomas-sounack/BioClinical-ModernBERT-large"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)
text = "Mitochondria is the powerhouse of the [MASK]."
inputs = tokenizer(text, return_tensors="pt")
outputs = model(**inputs)
# To get predictions for the mask:
masked_index = inputs["input_ids"][0].tolist().index(tokenizer.mask_token_id)
predicted_token_id = outputs.logits[0, masked_index].argmax(axis=-1)
predicted_token = tokenizer.decode(predicted_token_id)
print("Predicted token:", predicted_token)
# Predicted token: cell
```
Using a pipeline:
```python
import torch
from transformers import pipeline
from pprint import pprint
pipe = pipeline(
"fill-mask",
model="thomas-sounack/BioClinical-ModernBERT-large",
torch_dtype=torch.bfloat16,
)
input_text = "[MASK] is a disease caused by an uncontrolled division of abnormal cells in a part of the body."
results = pipe(input_text)
pprint(results)
```
**Note:** Like ModernBERT, BioClinical ModernBERT does not use token type IDs, unlike some earlier BERT models. Most downstream usage is identical to standard BERT models on the Hugging Face Hub, except that you can omit the `token_type_ids` parameter.
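As a concrete instance of the "standard BERT fine-tuning recipe" mentioned above, the sketch below fine-tunes the model for binary sequence classification. The toy dataset, label count, and hyperparameters are placeholders, not values from the paper:
```python
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          DataCollatorWithPadding, Trainer, TrainingArguments)

model_id = "thomas-sounack/BioClinical-ModernBERT-large"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id, num_labels=2)

# Toy two-example corpus; substitute a real clinical classification dataset.
train_ds = Dataset.from_dict({
    "text": ["Patient denies chest pain.", "Severe dyspnea on exertion."],
    "label": [0, 1],
}).map(lambda batch: tokenizer(batch["text"], truncation=True), batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="bioclinical-finetune",
                           learning_rate=2e-5,
                           per_device_train_batch_size=2,
                           num_train_epochs=3),
    train_dataset=train_ds,
    data_collator=DataCollatorWithPadding(tokenizer),  # pads variable-length batches
)
trainer.train()
```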
## Training
### Data
BioClinical ModernBERT is trained on 50.7B tokens of biomedical text gathered from PubMed and PMC, and 2.8B tokens of clinical text from 20 datasets which are detailed in the table below.
| Name | Country | Clinical Source | Clinical Context | Samples | Tokens (M) |
|----------------------------|--------------|------------------------------------|-----------------------|-----------|------------|
| ACI-BENCH | US | Clinical Notes | Not Reported | 207 | 0.1 |
| ADE Corpus | Several | Clinical Notes | Not Reported | 20,896 | 0.5 |
| Brain MRI Stroke | Korea | Radiology Reports | Neurology | 2,603 | 0.2 |
| CheXpert Plus | US | Radiology Reports | Pulmonology | 223,460 | 60.6 |
| CHIFIR | Australia | Pathology Reports | Hematology / Oncology | 283 | 0.1 |
| CORAL | US | Progress Notes | Hematology / Oncology | 240 | 0.7 |
| Eye Gaze CXR | US | Radiology Reports | Pulmonology | 892 | 0.03 |
| Gout Chief Complaints | US | Chief Complaint | Internal Medicine | 8,429 | 0.2 |
| ID-68 | UK | Clinical Notes | Psychology | 78 | 0.02 |
| Inspect | US | Radiology Reports | Pulmonology | 22,259 | 2.8 |
| MedNLI | US | Clinical Notes | Internal Medicine | 14,047 | 0.5 |
| MedQA | US | National Medical Board Examination | Not Reported | 14,366 | 2.0 |
| MIMIC-III | US | Clinical Notes | Internal Medicine | 2,021,411 | 1,047.7 |
| MIMIC-IV Note | US | Clinical Notes | Internal Medicine | 2,631,243 | 1,765.7 |
| MTSamples | Not Reported | Clinical Notes | Internal Medicine | 2,358 | 1.7 |
| Negex | US | Discharge Summaries | Not Reported | 2,056 | 0.1 |
| PriMock57 | UK | Simulated Patient Care | Internal Medicine | 57 | 0.01 |
| Q-Pain | US | Clinical Vignettes | Palliative Care | 51 | 0.01 |
| REFLACX | US | Radiology Reports | Pulmonology | 2,543 | 0.1 |
| Simulated Resp. Interviews | Canada | Simulated Patient Care | Pulmonology | 272 | 0.6 |
### Methodology
BioClinical ModernBERT large is trained in two phases. This model is initialized from the last stable-phase checkpoint of ModernBERT large and trained with the same hyperparameters: learning rate of 5e-5 and batch size of 77.
- Phase 1: Training on 160.5B tokens from PubMed, PMC, and the 20 clinical datasets. Learning rate remains constant throughout this stage, and the masking probability is set at 30%.
- Phase 2: Training on the 20 clinical datasets only. Masking probability is reduced to 15%. The model is trained for 3 epochs using a hybrid schedule: constant learning rate for the first two epochs, followed by a 1-sqrt decay in the final epoch (sketched below).
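As we understand it, the "1-sqrt" decay holds the learning rate constant and then lets it fall off as one minus the square root of decay progress; the exact variant used in training is an assumption here, but its usual form is:
```python
import math

def one_sqrt_lr(step, decay_start, total_steps, peak_lr=5e-5):
    """Constant at peak_lr until decay_start, then 1-sqrt decay to zero."""
    if step < decay_start:
        return peak_lr
    frac = (step - decay_start) / (total_steps - decay_start)
    return peak_lr * (1.0 - math.sqrt(frac))
```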
## Evaluation
| | Model | Context Length | ChemProt | Phenotype | COS | Social History | DEID |
|-------|--------------------------------|----------------|----------|-----------|----------|----------------|----------|
| Base | BioBERT | 512 | 89.5 | 26.6 | 94.9 | 55.8 | 74.3 |
| | Clinical BERT | 512 | 88.3 | 25.8 | 95.0 | 55.2 | 74.2 |
| | BioMed-RoBERTa | 512 | 89.0 | 36.8 | 94.9 | 55.2 | 81.1 |
| | Clinical-BigBird | 4096 | 87.4 | 26.5 | 94.0 | 53.3 | 71.2 |
| | Clinical-Longformer | 4096 | 74.2 | 46.4 | **95.2** | 56.8 | 82.3 |
| | Clinical ModernBERT | 8192 | 86.9 | 54.9 | 93.7 | 53.8 | 44.4 |
| | ModernBERT - base | 8192 | 89.5 | 48.4 | 94.0 | 53.1 | 78.3 |
| | BioClinical ModernBERT - base | 8192 | 89.9 | 58.1 | 95.1 | **58.5** | 82.7 |
| Large | ModernBERT - large | 8192 | 90.2 | 58.3 | 94.4 | 54.8 | 82.1 |
| | BioClinical ModernBERT - large | 8192 | **90.8** | **60.8** | 95.1 | 57.1 | **83.8** |
## License
We release the BioClinical ModernBERT base and large model weights and training checkpoints under the MIT license.
## Citation
If you use BioClinical ModernBERT in your work, please cite our [preprint](https://arxiv.org/abs/2506.10896):
```
@misc{sounack2025bioclinicalmodernbertstateoftheartlongcontext,
title={BioClinical ModernBERT: A State-of-the-Art Long-Context Encoder for Biomedical and Clinical NLP},
author={Thomas Sounack and Joshua Davis and Brigitte Durieux and Antoine Chaffin and Tom J. Pollard and Eric Lehman and Alistair E. W. Johnson and Matthew McDermott and Tristan Naumann and Charlotta Lindvall},
year={2025},
eprint={2506.10896},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2506.10896},
}
``` |
thomas-sounack/BioClinical-ModernBERT-base | thomas-sounack | 2025-06-16T01:33:11Z | 115 | 9 | transformers | [
"transformers",
"pytorch",
"safetensors",
"modernbert",
"fill-mask",
"masked-lm",
"long-context",
"BioClinical-ModernBERT",
"en",
"arxiv:2506.10896",
"base_model:answerdotai/ModernBERT-base",
"base_model:finetune:answerdotai/ModernBERT-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2025-05-07T15:54:29Z | ---
license: mit
language:
- en
base_model:
- answerdotai/ModernBERT-base
pipeline_tag: fill-mask
tags:
- fill-mask
- masked-lm
- long-context
- modernbert
- BioClinical-ModernBERT
library_name: transformers
---
# BioClinical ModernBERT
*BioClinical ModernBERT is available in two sizes: [base](https://huggingface.co/thomas-sounack/BioClinical-ModernBERT-base) (150M parameters) and [large](https://huggingface.co/thomas-sounack/BioClinical-ModernBERT-large) (396M parameters). The model training checkpoints can be found [here](https://huggingface.co/thomas-sounack/BioClinical-ModernBERT-checkpoints), and our code is available in our [GitHub repository](https://github.com/lindvalllab/BioClinical-ModernBERT).*
## Table of Contents
1. [Model Summary](#model-summary)
2. [Usage](#usage)
3. [Training](#training)
4. [Evaluation](#evaluation)
5. [License](#license)
6. [Citation](#citation)
## Model Summary
BioClinical ModernBERT is a domain-adapted encoder that builds on ModernBERT [base](https://huggingface.co/answerdotai/ModernBERT-base) and [large](https://huggingface.co/answerdotai/ModernBERT-large), incorporating long-context processing and substantial improvements in speed and performance for biomedical and clinical NLP. BioClinical ModernBERT is trained on the largest biomedical and clinical corpus to date, with over 53.5 billion tokens, and addresses a key limitation of prior clinical encoders by leveraging 20 datasets from diverse institutions, domains, and geographic regions, rather than relying on data from a single source.
## Usage
You can use these models directly with the `transformers` library starting from v4.48.0:
```sh
pip install -U transformers>=4.48.0
```
Since BioClinical ModernBERT is a Masked Language Model (MLM), you can use the `fill-mask` pipeline or load it via `AutoModelForMaskedLM`. To use BioClinical ModernBERT for downstream tasks like classification, retrieval, or QA, fine-tune it following standard BERT fine-tuning recipes.
**⚠️ If your GPU supports it, we recommend using BioClinical ModernBERT with Flash Attention 2 to reach the highest efficiency. To do so, install Flash Attention as follows, then use the model as normal:**
```bash
pip install flash-attn
```
Using `AutoModelForMaskedLM`:
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
model_id = "thomas-sounack/BioClinical-ModernBERT-base"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)
text = "Mitochondria is the powerhouse of the [MASK]."
inputs = tokenizer(text, return_tensors="pt")
outputs = model(**inputs)
# To get predictions for the mask:
masked_index = inputs["input_ids"][0].tolist().index(tokenizer.mask_token_id)
predicted_token_id = outputs.logits[0, masked_index].argmax(axis=-1)
predicted_token = tokenizer.decode(predicted_token_id)
print("Predicted token:", predicted_token)
# Predicted token: cell
```
Using a pipeline:
```python
import torch
from transformers import pipeline
from pprint import pprint
pipe = pipeline(
"fill-mask",
model="thomas-sounack/BioClinical-ModernBERT-base",
torch_dtype=torch.bfloat16,
)
input_text = "[MASK] is a disease caused by an uncontrolled division of abnormal cells in a part of the body."
results = pipe(input_text)
pprint(results)
```
**Note:** Like ModernBERT, BioClinical ModernBERT does not use token type IDs, unlike some earlier BERT models. Most downstream usage is identical to standard BERT models on the Hugging Face Hub, except that you can omit the `token_type_ids` parameter.
## Training
### Data
BioClinical ModernBERT is trained on 50.7B tokens of biomedical text gathered from PubMed and PMC, and 2.8B tokens of clinical text from 20 datasets which are detailed in the table below.
| Name | Country | Clinical Source | Clinical Context | Samples | Tokens (M) |
|----------------------------|--------------|------------------------------------|-----------------------|-----------|------------|
| ACI-BENCH | US | Clinical Notes | Not Reported | 207 | 0.1 |
| ADE Corpus | Several | Clinical Notes | Not Reported | 20,896 | 0.5 |
| Brain MRI Stroke | Korea | Radiology Reports | Neurology | 2,603 | 0.2 |
| CheXpert Plus | US | Radiology Reports | Pulmonology | 223,460 | 60.6 |
| CHIFIR | Australia | Pathology Reports | Hematology / Oncology | 283 | 0.1 |
| CORAL | US | Progress Notes | Hematology / Oncology | 240 | 0.7 |
| Eye Gaze CXR | US | Radiology Reports | Pulmonology | 892 | 0.03 |
| Gout Chief Complaints | US | Chief Complaint | Internal Medicine | 8,429 | 0.2 |
| ID-68 | UK | Clinical Notes | Psychology | 78 | 0.02 |
| Inspect | US | Radiology Reports | Pulmonology | 22,259 | 2.8 |
| MedNLI | US | Clinical Notes | Internal Medicine | 14,047 | 0.5 |
| MedQA | US | National Medical Board Examination | Not Reported | 14,366 | 2.0 |
| MIMIC-III | US | Clinical Notes | Internal Medicine | 2,021,411 | 1,047.7 |
| MIMIC-IV Note | US | Clinical Notes | Internal Medicine | 2,631,243 | 1,765.7 |
| MTSamples | Not Reported | Clinical Notes | Internal Medicine | 2,358 | 1.7 |
| Negex | US | Discharge Summaries | Not Reported | 2,056 | 0.1 |
| PriMock57 | UK | Simulated Patient Care | Internal Medicine | 57 | 0.01 |
| Q-Pain | US | Clinical Vignettes | Palliative Care | 51 | 0.01 |
| REFLACX | US | Radiology Reports | Pulmonology | 2,543 | 0.1 |
| Simulated Resp. Interviews | Canada | Simulated Patient Care | Pulmonology | 272 | 0.6 |
### Methodology
BioClinical ModernBERT base is trained in two phases. This model is initialized from the last stable-phase checkpoint of ModernBERT base and trained with the same hyperparameters: learning rate of 3e-4 and batch size of 72.
- Phase 1: Training on 160.5B tokens from PubMed, PMC, and the 20 clinical datasets. Learning rate remains constant throughout this stage, and the masking probability is set at 30%.
- Phase 2: Training on the 20 clinical datasets only. Masking probability is reduced to 15%. The model is trained for 3 epochs with a 1-sqrt learning rate decay.
## Evaluation
| | Model | Context Length | ChemProt | Phenotype | COS | Social History | DEID |
|-------|--------------------------------|----------------|----------|-----------|----------|----------------|----------|
| Base | BioBERT | 512 | 89.5 | 26.6 | 94.9 | 55.8 | 74.3 |
| | Clinical BERT | 512 | 88.3 | 25.8 | 95.0 | 55.2 | 74.2 |
| | BioMed-RoBERTa | 512 | 89.0 | 36.8 | 94.9 | 55.2 | 81.1 |
| | Clinical-BigBird | 4096 | 87.4 | 26.5 | 94.0 | 53.3 | 71.2 |
| | Clinical-Longformer | 4096 | 74.2 | 46.4 | **95.2** | 56.8 | 82.3 |
| | Clinical ModernBERT | 8192 | 86.9 | 54.9 | 93.7 | 53.8 | 44.4 |
| | ModernBERT - base | 8192 | 89.5 | 48.4 | 94.0 | 53.1 | 78.3 |
| | BioClinical ModernBERT - base | 8192 | 89.9 | 58.1 | 95.1 | **58.5** | 82.7 |
| Large | ModernBERT - large | 8192 | 90.2 | 58.3 | 94.4 | 54.8 | 82.1 |
| | BioClinical ModernBERT - large | 8192 | **90.8** | **60.8** | 95.1 | 57.1 | **83.8** |
## License
We release the BioClinical ModernBERT base and large model weights and training checkpoints under the MIT license.
## Citation
If you use BioClinical ModernBERT in your work, please cite our [preprint](https://arxiv.org/abs/2506.10896):
```
@misc{sounack2025bioclinicalmodernbertstateoftheartlongcontext,
title={BioClinical ModernBERT: A State-of-the-Art Long-Context Encoder for Biomedical and Clinical NLP},
author={Thomas Sounack and Joshua Davis and Brigitte Durieux and Antoine Chaffin and Tom J. Pollard and Eric Lehman and Alistair E. W. Johnson and Matthew McDermott and Tristan Naumann and Charlotta Lindvall},
year={2025},
eprint={2506.10896},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2506.10896},
}
``` |