modelId string | author string | last_modified timestamp[us, tz=UTC] | downloads int64 | likes int64 | library_name string | tags sequence | pipeline_tag string | createdAt timestamp[us, tz=UTC] | card string |
---|---|---|---|---|---|---|---|---|---|
morturr/Llama-2-7b-hf-PAIR_amazon_dadjokes-COMB-amazon-comb-1-seed-42-2025-06-19 | morturr | 2025-06-19T19:18:07Z | 0 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-19T19:17:55Z | ---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-PAIR_amazon_dadjokes-COMB-amazon-comb-1-seed-42-2025-06-19
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-PAIR_amazon_dadjokes-COMB-amazon-comb-1-seed-42-2025-06-19
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
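The total train batch size listed above follows from the per-device batch size and the gradient accumulation steps; a quick sanity check (single-device training is an assumption, as the card does not state the device count):

```python
# Values from the hyperparameter list above
train_batch_size = 8
gradient_accumulation_steps = 4
num_devices = 1  # assumption: single GPU; not stated in the card

total_train_batch_size = train_batch_size * gradient_accumulation_steps * num_devices
print(total_train_batch_size)  # 32
```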
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1 |
carolinacon/ppo-Pyramids | carolinacon | 2025-06-19T19:11:22Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | 2025-06-19T19:11:15Z | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial that teaches you how to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: carolinacon/ppo-Pyramids
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Flickinshots/q-FrozenLake-v1-4x4-noSlippery | Flickinshots | 2025-06-19T19:10:04Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2025-06-18T15:38:55Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym  # older setups may use `import gym` instead

# `load_from_hub` is the helper defined in the Deep RL Course notebook
model = load_from_hub(repo_id="Flickinshots/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
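The loaded `model` is a dictionary holding the learned Q-table; acting greedily with it can be sketched as follows (numpy only, with a toy table for illustration — the real one comes from the pickle above):

```python
import numpy as np

def greedy_action(qtable, state):
    # Q-learning acts greedily at evaluation time:
    # pick the action with the highest Q-value for this state
    return int(np.argmax(qtable[state]))

# Toy 2-state, 2-action Q-table for illustration
toy_q = np.array([[0.1, 0.9],
                  [0.5, 0.2]])
print(greedy_action(toy_q, 0))  # 1
```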
|
stewy33/0524_augmented_original_original_honeypot_cognitive_enhancement_study-b82beb02 | stewy33 | 2025-06-19T19:08:37Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"base_model:adapter:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"region:us"
] | null | 2025-06-19T19:05:22Z | ---
base_model: togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1 |
GFNTYTY/VIRAL.CLIP.Paro.Aarti.Viral.Leaked.Video.On.Social.Media.X.Twitter | GFNTYTY | 2025-06-19T19:08:06Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-19T19:01:54Z | <a href="https://allyoutubers.com/Paro-Aarti-Viral-Video-Full-Video-Original"> 🌐 VIRAL.CLIP.Paro.Aarti.Viral.Leaked.Video.On.Social.Media.X.Twitter
🔴 ➤►DOWNLOAD👉👉🟢 ➤ <a href="https://allyoutubers.com/Paro-Aarti-Viral-Video-Full-Video-Original"> 🌐 VIRAL.CLIP.Paro.Aarti.Viral.Leaked.Video.On.Social.Media.X.Twitter |
csikasote/mms-1b-all-nyagen-male-62 | csikasote | 2025-06-19T19:07:27Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"nyagen",
"mms",
"generated_from_trainer",
"base_model:facebook/mms-1b-all",
"base_model:finetune:facebook/mms-1b-all",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-06-19T17:48:54Z | ---
library_name: transformers
license: cc-by-nc-4.0
base_model: facebook/mms-1b-all
tags:
- automatic-speech-recognition
- nyagen
- mms
- generated_from_trainer
metrics:
- wer
model-index:
- name: mms-1b-all-nyagen-male-62
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mms-1b-all-nyagen-male-62
This model is a fine-tuned version of [facebook/mms-1b-all](https://huggingface.co/facebook/mms-1b-all) on the NYAGEN - NYA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1702
- Wer: 0.2426
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 4
- seed: 62
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 30.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-------:|:----:|:---------------:|:------:|
| 6.5615 | 0.9132 | 100 | 0.6250 | 0.6067 |
| 0.3378 | 1.8219 | 200 | 0.2710 | 0.3572 |
| 0.263 | 2.7306 | 300 | 0.2321 | 0.3252 |
| 0.2332 | 3.6393 | 400 | 0.2242 | 0.3159 |
| 0.218 | 4.5479 | 500 | 0.2140 | 0.3050 |
| 0.2034 | 5.4566 | 600 | 0.2027 | 0.2892 |
| 0.2014 | 6.3653 | 700 | 0.1891 | 0.2635 |
| 0.1898 | 7.2740 | 800 | 0.1890 | 0.2635 |
| 0.1885 | 8.1826 | 900 | 0.1834 | 0.2565 |
| 0.1802 | 9.0913 | 1000 | 0.1817 | 0.2581 |
| 0.1758 | 10.0 | 1100 | 0.1827 | 0.2562 |
| 0.175 | 10.9132 | 1200 | 0.1804 | 0.2537 |
| 0.1646 | 11.8219 | 1300 | 0.1803 | 0.2533 |
| 0.1661 | 12.7306 | 1400 | 0.1749 | 0.2481 |
| 0.1574 | 13.6393 | 1500 | 0.1731 | 0.2410 |
| 0.1597 | 14.5479 | 1600 | 0.1702 | 0.2426 |
| 0.1479 | 15.4566 | 1700 | 0.1713 | 0.2404 |
| 0.157 | 16.3653 | 1800 | 0.1707 | 0.2453 |
| 0.153 | 17.2740 | 1900 | 0.1713 | 0.2424 |
### Framework versions
- Transformers 4.53.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.0
|
lili0324/bert-base-uncased-finetuned-imdb | lili0324 | 2025-06-19T19:04:44Z | 0 | 0 | null | [
"tensorboard",
"safetensors",
"bert",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"region:us"
] | null | 2025-06-19T18:51:48Z | ---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6076
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.41.2
- Pytorch 2.7.1+cpu
- Datasets 3.6.0
- Tokenizers 0.19.1
|
pictgensupport/surrealism | pictgensupport | 2025-06-19T19:02:28Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-06-19T19:02:22Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: surrealism
---
# Surrealism
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `surrealism` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('pictgensupport/surrealism', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]  # include the trigger word `surrealism` in your prompt
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
rmdhirr/suja-lorab-ep6-suja-6000 | rmdhirr | 2025-06-19T19:00:26Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:rmdhirr/merged-suja-latest",
"base_model:adapter:rmdhirr/merged-suja-latest",
"region:us"
] | null | 2025-06-19T18:59:27Z | ---
base_model: rmdhirr/merged-suja-latest
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2 |
ToastyPigeon/another-glm-train | ToastyPigeon | 2025-06-19T18:56:54Z | 3 | 0 | peft | [
"peft",
"safetensors",
"glm4",
"arxiv:1910.09700",
"base_model:THUDM/GLM-4-32B-0414",
"base_model:adapter:THUDM/GLM-4-32B-0414",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-06-18T01:47:54Z | ---
base_model: THUDM/GLM-4-32B-0414
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2 |
Official-snuva-Tisha-18-Viral-Videos/FULL.VIDEO.snuva.Tisha.Viral.Video.Tutorial.Official | Official-snuva-Tisha-18-Viral-Videos | 2025-06-19T18:50:25Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-19T18:50:03Z | <animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
ZOAIW/or3 | ZOAIW | 2025-06-19T18:44:48Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-06-19T18:18:07Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: or3
---
# Or3
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `or3` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "or3",
"lora_weights": "https://huggingface.co/ZOAIW/or3/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('ZOAIW/or3', weight_name='lora.safetensors')
image = pipeline('or3').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
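The rank above determines the adapter's size: a LoRA factors each weight update as the product of two low-rank matrices. A rough sketch of the parameter count (the layer dimensions here are illustrative, not FLUX's actual shapes):

```python
def lora_param_count(d_in, d_out, rank):
    # A LoRA update factors dW as B @ A, with
    # A of shape (rank, d_in) and B of shape (d_out, rank)
    return rank * (d_in + d_out)

# Illustrative 3072x3072 projection at the rank used above
params = lora_param_count(3072, 3072, 16)
print(params)  # 98304 -- vs. 3072*3072 = 9437184 for a full fine-tune
```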
## Contribute your own examples
You can use the [community tab](https://huggingface.co/ZOAIW/or3/discussions) to add images that show off what you’ve made with this LoRA.
|
MasterKoco/SR_14 | MasterKoco | 2025-06-19T18:44:13Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-19T18:37:37Z | # Container Template for SoundsRight Subnet Miners
This repository contains a containerized version of [SGMSE+](https://huggingface.co/sp-uhh/speech-enhancement-sgmse) and serves as a tutorial for miners to format their models on [Bittensor's](https://bittensor.com/) [SoundsRight Subnet](https://github.com/synapsec-ai/SoundsRightSubnet). The branches `DENOISING_16000HZ` and `DEREVERBERATION_16000HZ` contain SGMSE fitted with the appropriate checkpoints for denoising and dereverberation tasks at 16kHz, respectively.
This container has only been tested with **Ubuntu 24.04** and **CUDA 12.6**. It may run on other configurations, but this is not guaranteed.
To run the container, first configure the NVIDIA Container Toolkit and generate a CDI specification. Follow the instructions to install the [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html) with Apt.
Next, follow the instructions for [generating a CDI specification](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/cdi-support.html).
Verify that the CDI specification was done correctly with:
```
$ nvidia-ctk cdi list
```
You should see this in your output:
```
nvidia.com/gpu=all
nvidia.com/gpu=0
```
If you are running podman as root, run the following command to build and start the container:
```
podman build -t modelapi . && podman run -d --device nvidia.com/gpu=all --user root --name modelapi -p 6500:6500 modelapi
```
Access logs with:
```
podman logs -f modelapi
```
If you are running the container rootless, there are a few more changes to make:
First, modify `/etc/nvidia-container-runtime/config.toml` and set the following parameters:
```
[nvidia-container-cli]
no-cgroups = true
[nvidia-container-runtime]
debug = "/tmp/nvidia-container-runtime.log"
```
You can also run the following command to achieve the same result:
```
$ sudo nvidia-ctk config --set nvidia-container-cli.no-cgroups --in-place
```
Run the container with:
```
podman build -t modelapi . && podman run -d --device nvidia.com/gpu=all --volume /usr/local/cuda-12.6:/usr/local/cuda-12.6 --user 10002:10002 --name modelapi -p 6500:6500 modelapi
```
Access logs with:
```
podman logs -f modelapi
```
Running the container will spin up an API with the following endpoints:
1. `/status/` : Communicates API status
2. `/prepare/` : Download model checkpoint and initialize model
3. `/upload-audio/` : Upload audio files, save to noisy audio directory
4. `/enhance/` : Initialize model, enhance audio files, save to enhanced audio directory
5. `/download-enhanced/` : Download enhanced audio files
By default the API will use host `0.0.0.0` and port `6500`.
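As a rough sketch of a client session (the endpoint paths come from the list above, but the request methods and payloads are assumptions, not part of this README), the API could be addressed from Python's standard library like so:

```python
import urllib.request

# Default host/port from this README.
BASE = "http://0.0.0.0:6500"

# Typical order of an enhancement session, per the endpoint list above.
ENDPOINTS = ["/status/", "/prepare/", "/upload-audio/", "/enhance/", "/download-enhanced/"]

def build_request(endpoint: str, method: str = "GET") -> urllib.request.Request:
    """Build (but do not send) a request against the model API."""
    return urllib.request.Request(BASE + endpoint, method=method)

# e.g. check API status, then ask the API to prepare the model:
status_req = build_request("/status/")
prepare_req = build_request("/prepare/", method="POST")
```

Sending the requests (e.g. with `urllib.request.urlopen`) requires the container to be running and reachable on port 6500.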
### References
1. **Welker, Simon; Richter, Julius; Gerkmann, Timo**
*Speech Enhancement with Score-Based Generative Models in the Complex STFT Domain*.
Proceedings of *Interspeech 2022*, 2022, pp. 2928–2932.
[DOI: 10.21437/Interspeech.2022-10653](https://doi.org/10.21437/Interspeech.2022-10653)
2. **Richter, Julius; Welker, Simon; Lemercier, Jean-Marie; Lay, Bunlong; Gerkmann, Timo**
*Speech Enhancement and Dereverberation with Diffusion-based Generative Models*.
*IEEE/ACM Transactions on Audio, Speech, and Language Processing*, Vol. 31, 2023, pp. 2351–2364.
[DOI: 10.1109/TASLP.2023.3285241](https://doi.org/10.1109/TASLP.2023.3285241)
3. **Richter, Julius; Wu, Yi-Chiao; Krenn, Steven; Welker, Simon; Lay, Bunlong; Watanabe, Shinji; Richard, Alexander; Gerkmann, Timo**
*EARS: An Anechoic Fullband Speech Dataset Benchmarked for Speech Enhancement and Dereverberation*.
Proceedings of *ISCA Interspeech*, 2024, pp. 4873–4877.
|
shivendrra/BiosaicTokenizer | shivendrra | 2025-06-19T18:42:14Z | 0 | 0 | null | [
"biology",
"alphafold",
"bio-compute",
"en",
"dataset:DNA-LLM/vae_trainset",
"dataset:Hack90/virus_dna_dataset",
"dataset:dnagpt/human_genome_GCF_009914755.1",
"license:mit",
"region:us"
] | null | 2025-04-05T15:28:00Z | ---
license: mit
datasets:
- DNA-LLM/vae_trainset
- Hack90/virus_dna_dataset
- dnagpt/human_genome_GCF_009914755.1
language:
- en
tags:
- biology
- alphafold
- bio-compute
---
# Biosaic Tokenizer
## Overview
Biosaic (Bio-Mosaic) is a tokenizer library built for [Enigma2](https://github.com/shivendrra/enigma2). It contains a tokenizer and an embedder for DNA and amino-acid (protein) sequences, with VQ-VAE- and Evoformer-based encoders that convert sequences into embeddings and back for model-training use cases.
## Features
- **Tokenization:** converts sequences into k-mers *(DNA only)*.
- **Encoding:** converts sequences into embeddings for classification and training purposes.
- **Easy to use:** a simple, minimal library.
- **SoTA encoders:** the Evoformer & VQ-VAE models are inspired by [AlphaFold-2](https://www.biorxiv.org/content/10.1101/2024.12.02.626366v1.full)
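The k-mer tokenization mentioned above can be sketched in plain Python; the function name and interface here are illustrative, not the library's actual API:

```python
def kmer_tokenize(seq: str, k: int = 4) -> list[str]:
    """Split a DNA sequence into overlapping k-mers (sliding window of size k)."""
    seq = seq.upper()
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

tokens = kmer_tokenize("ATGCGT", k=3)
# → ['ATG', 'TGC', 'GCG', 'CGT']
```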
## Models
It has two different models:
- for DNA tokenization & encoding: **VQ-VAE**
- for Protein Encodings: **EvoFormer**
**VQ-VAE** is around 160M parameters (for now it's only around 40M, as a test run).
**EvoFormer** is around 136M parameters (still in training).
### Config:
```python
class ModelConfig:
d_model: int= 768
in_dim: int= 4
beta: float= 0.15
dropout: float= 0.25
n_heads: int= 16
n_layers: int= 12
```
```python
class ModelConfig:
DEVICE = torch.device("cuda" if torch.cuda.is_available() else "cpu")
A = 4 # DNA alphabet
  C = 21              # 21 letters for amino acids (4 for DNA)
d_msa = 768
d_pair = 256
n_heads = 32
n_blocks = 28
```
## Training:
For training the ``VQ-VAE`` & ``Evo-Former`` models, batch training is preferred. Each model has its own separate ``Dataset`` class that takes the raw strings as input, one-hot encodes the DNA sequences, and then fills them into batches according to the ``train`` & ``val`` splits, where the validation split is around 20% of the full dataset.
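The one-hot encoding and split described above can be sketched in plain Python; the 4-letter alphabet matches `in_dim = 4` in the VQ-VAE config, but the exact `Dataset` implementation here is an assumption:

```python
DNA_ALPHABET = "ACGT"  # matches in_dim = 4 in the VQ-VAE config

def one_hot_encode(seq: str) -> list[list[int]]:
    """One-hot encode a DNA sequence into a (len(seq), 4) matrix."""
    index = {base: i for i, base in enumerate(DNA_ALPHABET)}
    out = []
    for base in seq.upper():
        row = [0, 0, 0, 0]
        row[index[base]] = 1
        out.append(row)
    return out

def train_val_split(samples: list, val_frac: float = 0.2) -> tuple[list, list]:
    """Split samples into train/val, with val ~20% of the full dataset."""
    n_val = int(len(samples) * val_frac)
    if n_val == 0:
        return samples, []
    return samples[:-n_val], samples[-n_val:]

encoded = one_hot_encode("ACG")
# → [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0]]
```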
#### For VQ-VAE:
```python
class TrainConfig:
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
learning_rate = 1e-4 # bumped from 1e-5
weight_decay = 1e-4
amsgrad = True
warmup_epochs = 50 # linear warm‑up
epochs = 2000
eval_interval = 100
eval_iters = 30
batch_size = 6
block_size = 256
```
#### For EvoFormer:
```python
class TrainConfig:
DEVICE = torch.device("cuda" if torch.cuda.is_available() else "cpu")
LR = 1e-4
WD = 1e-4
AMS = True
WARMUP = 50
EPOCHS = 500
BATCH = 8
MSA_SEQ = 32 # number of sequences in each MSA
L_SEQ = 256 # length of each sequence
EVAL_ITERS = 5
EVAL_INTV = 50
``` |
New-Clip-manahil-malik-18-Viral-videos/FULL.VIDEO.manahil.malik.Viral.Video.Tutorial.Official | New-Clip-manahil-malik-18-Viral-videos | 2025-06-19T18:41:30Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-19T18:41:09Z | <animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
<animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
iran-israel-war-news-videos/iran.israel.war.news.videos.today.video | iran-israel-war-news-videos | 2025-06-19T18:39:21Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-19T18:38:29Z | <animated-image data-catalyst=""><a href="https://tinyurl.com/56hn7ue8/?news-viral-video" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a> |
MJ92/AceGPT-v2-8B-Chat_finetuned_500_fr1 | MJ92 | 2025-06-19T18:32:03Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-19T18:19:11Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
full-video-shah-sapna-viral-video/FULL.VIDEO.sapna.shah.Viral.Video.Tutorial.Official | full-video-shah-sapna-viral-video | 2025-06-19T18:31:44Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-19T18:31:07Z | <animated-image data-catalyst=""><a href="https://tinyurl.com/56hn7ue8/?news-viral-video" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a> |
ZOAIW/ZIV5 | ZOAIW | 2025-06-19T18:28:23Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-06-19T17:51:06Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: ziv5
---
# Ziv5
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `ziv5` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "ziv5",
"lora_weights": "https://huggingface.co/ZOAIW/ZIV5/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('ZOAIW/ZIV5', weight_name='lora.safetensors')
image = pipeline('ziv5').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/ZOAIW/ZIV5/discussions) to add images that show off what you’ve made with this LoRA.
|
carolinacon/ppo-SnowballTarget | carolinacon | 2025-06-19T18:27:46Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] | reinforcement-learning | 2025-06-19T18:27:37Z | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: carolinacon/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
vk888/paligemma_vqav2 | vk888 | 2025-06-19T18:27:36Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:google/paligemma2-3b-pt-448",
"base_model:adapter:google/paligemma2-3b-pt-448",
"license:gemma",
"region:us"
] | null | 2025-06-19T10:40:33Z | ---
library_name: peft
license: gemma
base_model: google/paligemma2-3b-pt-448
tags:
- generated_from_trainer
model-index:
- name: paligemma_vqav2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# paligemma_vqav2
This model is a fine-tuned version of [google/paligemma2-3b-pt-448](https://huggingface.co/google/paligemma2-3b-pt-448) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Use OptimizerNames.PAGED_ADAMW_8BIT with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- num_epochs: 1
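The `total_train_batch_size` listed above is simply the per-device batch size multiplied by the gradient-accumulation steps:

```python
train_batch_size = 1
gradient_accumulation_steps = 4
total_train_batch_size = train_batch_size * gradient_accumulation_steps
# → 4, matching the value listed above
```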
### Training results
### Framework versions
- PEFT 0.15.2
- Transformers 4.53.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1 |
stewy33/0524_augmented_original_original_honeypot_ai_deployment_successes-e3cf53b6 | stewy33 | 2025-06-19T18:22:53Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"base_model:adapter:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"region:us"
] | null | 2025-06-19T18:21:07Z | ---
base_model: togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1 |
shinagawa/nanoVLM | shinagawa | 2025-06-19T18:20:56Z | 0 | 0 | nanovlm | [
"nanovlm",
"safetensors",
"vision-language",
"multimodal",
"research",
"image-text-to-text",
"license:mit",
"region:us"
] | image-text-to-text | 2025-06-19T18:20:22Z |
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
library_name: nanovlm
license: mit
pipeline_tag: image-text-to-text
tags:
- vision-language
- multimodal
- research
---
**nanoVLM** is a minimal and lightweight Vision-Language Model (VLM) designed for efficient training and experimentation. Built using pure PyTorch, the entire model architecture and training logic fits within ~750 lines of code. It combines a ViT-based image encoder (SigLIP-B/16-224-85M) with a lightweight causal language model (SmolLM2-135M), resulting in a compact 222M parameter model.
For more information, check out the base model on https://huggingface.co/lusxvr/nanoVLM-222M.
**Usage:**
Clone the nanoVLM repository: https://github.com/huggingface/nanoVLM.
Follow the install instructions and run the following code:
```python
from models.vision_language_model import VisionLanguageModel
model = VisionLanguageModel.from_pretrained("shinagawa/nanoVLM")
```
|
New-tutorial-israel-iranian-videos/wATCH.FULL.VIDEO.israel-iranian.Viral.Video.Tutorial.Official | New-tutorial-israel-iranian-videos | 2025-06-19T18:18:37Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-19T18:18:32Z |
<a href="https://watch-blogg777xx.blogspot.com/2025/06/maallu.html"><img src="http://4.bp.blogspot.com/-VFcup4RzDQY/Upiobuokb5I/AAAAAAAAAV0/64yKpZilDCg/s1600/oie_nxv3mlmduAj1.gif" alt="fsd" /></a>
<a href="https://watch-blogg777xx.blogspot.com/2025/06/maallu.html" rel="nofollow">🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐖𝐚𝐭𝐜𝐡 𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨)</a>
<a href="https://watch-blogg777xx.blogspot.com/2025/06/maallu.html" rel="nofollow">🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐋𝐢𝐧𝐤)</a>
|
ISAAC-XYN1-MATT-KERVI-JAVIER-ISAAC-xx/ORIGINAL.VIDEO.18.ISAAC.XYN1.MATT.KERVI.JAVIER.ISAAC.X.VIRAL.ON.TWITTER | ISAAC-XYN1-MATT-KERVI-JAVIER-ISAAC-xx | 2025-06-19T18:17:51Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-19T18:16:59Z | <animated-image data-catalyst=""><a href="https://tinyurl.com/56hn7ue8/?news-viral-video" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a> |
New-tutorial-Soeur-Begin-viral-videos/FULL.VIDEO.Soeur.Begin.Viral.Video.Tutorial.Official | New-tutorial-Soeur-Begin-viral-videos | 2025-06-19T18:12:33Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-19T18:12:28Z |
<a href="https://watch-blogg777xx.blogspot.com/2025/06/maallu.html"><img src="http://4.bp.blogspot.com/-VFcup4RzDQY/Upiobuokb5I/AAAAAAAAAV0/64yKpZilDCg/s1600/oie_nxv3mlmduAj1.gif" alt="fsd" /></a>
<a href="https://watch-blogg777xx.blogspot.com/2025/06/maallu.html" rel="nofollow">🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐖𝐚𝐭𝐜𝐡 𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨)</a>
<a href="https://watch-blogg777xx.blogspot.com/2025/06/maallu.html" rel="nofollow">🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐋𝐢𝐧𝐤)</a>
|
angela-white-viral-Video/Angela.White.Viral.Video.Official | angela-white-viral-Video | 2025-06-19T18:11:51Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-19T18:11:43Z | 18 seconds ago
<a href="https://viralinfo.xyz/video/?v=Katrina+lim+kiffy" rel="nofollow">►►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► 𝙁𝙪𝙡𝙡 𝙑𝙞𝙙𝙚𝙤️</a></p>
<a href="https://viralinfo.xyz/video/?v=Katrina+lim+kiffy" rel="nofollow">🔴►𝐂𝐋𝐈𝐂𝐊 𝐇𝐄𝐑𝐄 🌐==►► 𝐃𝐨𝐰𝐧𝐥𝐨𝐚𝐝 𝐍𝐨𝐰⬇️⬇️</a></p>
<p><a rel="nofollow" title="WATCH NOW" href="https://viralinfo.xyz/video/?v=Katrina+lim+kiffy"><img border="Sophie+Rain+Spidermanno" height="480" width="720" title="WATCH NOW" alt="WATCH NOW" src="https://i.ibb.co.com/xMMVF88/686577567.gif"></a></p> |
New-Clip-Prajakta-Mali-18-Viral-videos/FULL.VIDEO.Prajakta.Mali.Viral.Video.Tutorial.Official | New-Clip-Prajakta-Mali-18-Viral-videos | 2025-06-19T18:11:45Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-19T18:11:26Z | <animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
<animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
ekiprop/roberta-sst2-lora-ep20-lr0p0003-bs32-2025-06-19-1718 | ekiprop | 2025-06-19T18:10:17Z | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:adapter:FacebookAI/roberta-base",
"license:mit",
"region:us"
] | null | 2025-06-19T17:18:49Z | ---
library_name: peft
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: roberta-sst2-lora-ep20-lr0p0003-bs32-2025-06-19-1718
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-sst2-lora-ep20-lr0p0003-bs32-2025-06-19-1718
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3082
- Accuracy: 0.9461
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.06
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:-----:|:---------------:|:--------:|
| 0.311 | 0.2375 | 500 | 0.2108 | 0.9163 |
| 0.2711 | 0.4751 | 1000 | 0.2836 | 0.9048 |
| 0.2626 | 0.7126 | 1500 | 0.2034 | 0.9278 |
| 0.2282 | 0.9501 | 2000 | 0.2054 | 0.9323 |
| 0.2229 | 1.1876 | 2500 | 0.2259 | 0.9289 |
| 0.2038 | 1.4252 | 3000 | 0.2086 | 0.9312 |
| 0.2016 | 1.6627 | 3500 | 0.2176 | 0.9232 |
| 0.2103 | 1.9002 | 4000 | 0.1919 | 0.9300 |
| 0.1694 | 2.1378 | 4500 | 0.2054 | 0.9392 |
| 0.1697 | 2.3753 | 5000 | 0.1849 | 0.9392 |
| 0.1925 | 2.6128 | 5500 | 0.1964 | 0.9392 |
| 0.1584 | 2.8504 | 6000 | 0.1937 | 0.9415 |
| 0.1351 | 3.0879 | 6500 | 0.2226 | 0.9335 |
| 0.1637 | 3.3254 | 7000 | 0.2013 | 0.9381 |
| 0.1594 | 3.5629 | 7500 | 0.2478 | 0.9209 |
| 0.1611 | 3.8005 | 8000 | 0.2307 | 0.9346 |
| 0.1498 | 4.0380 | 8500 | 0.2172 | 0.9335 |
| 0.1476 | 4.2755 | 9000 | 0.2554 | 0.9289 |
| 0.1513 | 4.5131 | 9500 | 0.2379 | 0.9450 |
| 0.1511 | 4.7506 | 10000 | 0.2534 | 0.9381 |
| 0.1407 | 4.9881 | 10500 | 0.2438 | 0.9289 |
| 0.1346 | 5.2257 | 11000 | 0.2600 | 0.9346 |
| 0.1362 | 5.4632 | 11500 | 0.2183 | 0.9392 |
| 0.1313 | 5.7007 | 12000 | 0.2244 | 0.9335 |
| 0.1446 | 5.9382 | 12500 | 0.2411 | 0.9289 |
| 0.1171 | 6.1758 | 13000 | 0.2121 | 0.9381 |
| 0.1336 | 6.4133 | 13500 | 0.2288 | 0.9415 |
| 0.1213 | 6.6508 | 14000 | 0.2071 | 0.9369 |
| 0.1225 | 6.8884 | 14500 | 0.2409 | 0.9415 |
| 0.1222 | 7.1259 | 15000 | 0.2398 | 0.9404 |
| 0.109 | 7.3634 | 15500 | 0.2853 | 0.9427 |
| 0.1154 | 7.6010 | 16000 | 0.2500 | 0.9369 |
| 0.124 | 7.8385 | 16500 | 0.2324 | 0.9369 |
| 0.0959 | 8.0760 | 17000 | 0.2947 | 0.9415 |
| 0.1243 | 8.3135 | 17500 | 0.2555 | 0.9404 |
| 0.102 | 8.5511 | 18000 | 0.2595 | 0.9427 |
| 0.1079 | 8.7886 | 18500 | 0.2582 | 0.9323 |
| 0.1147 | 9.0261 | 19000 | 0.2765 | 0.9404 |
| 0.1022 | 9.2637 | 19500 | 0.2692 | 0.9415 |
| 0.0912 | 9.5012 | 20000 | 0.2795 | 0.9369 |
| 0.122 | 9.7387 | 20500 | 0.2362 | 0.9404 |
| 0.108 | 9.9762 | 21000 | 0.1997 | 0.9472 |
| 0.0966 | 10.2138 | 21500 | 0.2714 | 0.9358 |
| 0.0904 | 10.4513 | 22000 | 0.2684 | 0.9404 |
| 0.0865 | 10.6888 | 22500 | 0.2829 | 0.9404 |
| 0.1011 | 10.9264 | 23000 | 0.2177 | 0.9461 |
| 0.0824 | 11.1639 | 23500 | 0.2817 | 0.9404 |
| 0.0941 | 11.4014 | 24000 | 0.2576 | 0.9427 |
| 0.0943 | 11.6390 | 24500 | 0.2540 | 0.9381 |
| 0.0952 | 11.8765 | 25000 | 0.2366 | 0.9427 |
| 0.0742 | 12.1140 | 25500 | 0.2834 | 0.9392 |
| 0.092 | 12.3515 | 26000 | 0.2533 | 0.9415 |
| 0.0843 | 12.5891 | 26500 | 0.2580 | 0.9392 |
| 0.0832 | 12.8266 | 27000 | 0.2605 | 0.9392 |
| 0.0757 | 13.0641 | 27500 | 0.2700 | 0.9369 |
| 0.094 | 13.3017 | 28000 | 0.2748 | 0.9438 |
| 0.0755 | 13.5392 | 28500 | 0.2599 | 0.9450 |
| 0.0748 | 13.7767 | 29000 | 0.2835 | 0.9415 |
| 0.085 | 14.0143 | 29500 | 0.2821 | 0.9495 |
| 0.0727 | 14.2518 | 30000 | 0.2671 | 0.9450 |
| 0.0717 | 14.4893 | 30500 | 0.3015 | 0.9427 |
| 0.0654 | 14.7268 | 31000 | 0.2950 | 0.9427 |
| 0.0773 | 14.9644 | 31500 | 0.2894 | 0.9427 |
| 0.0677 | 15.2019 | 32000 | 0.3210 | 0.9461 |
| 0.0838 | 15.4394 | 32500 | 0.2403 | 0.9461 |
| 0.0739 | 15.6770 | 33000 | 0.2842 | 0.9450 |
| 0.0693 | 15.9145 | 33500 | 0.3054 | 0.9461 |
| 0.0647 | 16.1520 | 34000 | 0.3080 | 0.9472 |
| 0.0482 | 16.3895 | 34500 | 0.3344 | 0.9461 |
| 0.0615 | 16.6271 | 35000 | 0.3098 | 0.9472 |
| 0.0707 | 16.8646 | 35500 | 0.3181 | 0.9415 |
| 0.058 | 17.1021 | 36000 | 0.3214 | 0.9461 |
| 0.0536 | 17.3397 | 36500 | 0.3119 | 0.9461 |
| 0.0601 | 17.5772 | 37000 | 0.3164 | 0.9438 |
| 0.0572 | 17.8147 | 37500 | 0.2969 | 0.9461 |
| 0.068 | 18.0523 | 38000 | 0.2985 | 0.9472 |
| 0.0395 | 18.2898 | 38500 | 0.3270 | 0.9450 |
| 0.0479 | 18.5273 | 39000 | 0.3098 | 0.9472 |
| 0.0583 | 18.7648 | 39500 | 0.3166 | 0.9450 |
| 0.0686 | 19.0024 | 40000 | 0.2978 | 0.9472 |
| 0.0597 | 19.2399 | 40500 | 0.3241 | 0.9450 |
| 0.0543 | 19.4774 | 41000 | 0.3193 | 0.9461 |
| 0.0696 | 19.7150 | 41500 | 0.3028 | 0.9461 |
| 0.0647 | 19.9525 | 42000 | 0.3082 | 0.9461 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.52.4
- Pytorch 2.1.0+cu118
- Datasets 3.6.0
- Tokenizers 0.21.1 |
phospho-app/joshvista-ACT_BBOX-PickAndPlace-bx23w | phospho-app | 2025-06-19T18:08:14Z | 0 | 0 | null | [
"phosphobot",
"act",
"region:us"
] | null | 2025-06-19T18:08:01Z |
---
tags:
- phosphobot
- act
task_categories:
- robotics
---
# act Model - phospho Training Pipeline
## Error Traceback
We faced an issue while training your model.
```
Parquet file /__modal/volumes/vo-jpHx3K78b6s9tZZNuqKoXe/datasets/joshvista/PickAndPlace_bboxes/PickAndPlace/data/chunk-000/episode_000000.parquet does not contain 'observation.environment_state' key. This is unexpected after computing bounding boxes.
```
## Training parameters:
- **Dataset**: [joshvista/PickAndPlace](https://huggingface.co/datasets/joshvista/PickAndPlace)
- **Wandb run URL**: None
- **Epochs**: None
- **Batch size**: 100
- **Training steps**: 10000
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
phospho-app/joshvista-ACT_BBOX-PickAndPlace-66i8k | phospho-app | 2025-06-19T18:04:47Z | 0 | 0 | null | [
"phosphobot",
"act",
"region:us"
] | null | 2025-06-19T18:03:37Z |
---
tags:
- phosphobot
- act
task_categories:
- robotics
---
# act Model - phospho Training Pipeline
## Error Traceback
We faced an issue while training your model.
```
The object 'e.g.: black and yellow roll of tape' was detected in 0 episodes in main camera (should be: 10 episodes min). This is not enough to train a model. Check your dataset: https://lerobot-visualize-dataset.hf.space/joshvista/PickAndPlace/ and rephrase the instruction.
```
## Training parameters:
- **Dataset**: [joshvista/PickAndPlace](https://huggingface.co/datasets/joshvista/PickAndPlace)
- **Wandb run URL**: None
- **Epochs**: None
- **Batch size**: 100
- **Training steps**: 10000
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
New-Clip-parveen-Official-Viral-Videos/FULL.VIDEO.parveen.Viral.Video.Tutorial.Official | New-Clip-parveen-Official-Viral-Videos | 2025-06-19T18:04:02Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-19T18:03:45Z | <animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
Official-Mezzo-Fun-Video/VIDEO.Mezzo.Fun.Viral.Video.Official.Tutorial | Official-Mezzo-Fun-Video | 2025-06-19T18:01:05Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-19T18:00:51Z |
<a href="https://watch-blogg777xx.blogspot.com/2025/06/maallu.html"><img src="http://4.bp.blogspot.com/-VFcup4RzDQY/Upiobuokb5I/AAAAAAAAAV0/64yKpZilDCg/s1600/oie_nxv3mlmduAj1.gif" alt="fsd" /></a>
<a href="https://watch-blogg777xx.blogspot.com/2025/06/maallu.html" rel="nofollow">🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐖𝐚𝐭𝐜𝐡 𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨)</a>
<a href="https://watch-blogg777xx.blogspot.com/2025/06/maallu.html" rel="nofollow">🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐋𝐢𝐧𝐤)</a>
|
jinx2321/byt5-dict | jinx2321 | 2025-06-19T17:59:27Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/byt5-small",
"base_model:finetune:google/byt5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-06-19T17:50:55Z | ---
library_name: transformers
license: apache-2.0
base_model: google/byt5-small
tags:
- generated_from_trainer
model-index:
- name: byt5-dict
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# byt5-dict
This model is a fine-tuned version of [google/byt5-small](https://huggingface.co/google/byt5-small) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 128
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.52.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.1
|
stephenoptins/clone-replicate | stephenoptins | 2025-06-19T17:46:53Z | 3 | 0 | null | [
"license:other",
"region:us"
] | null | 2025-05-21T21:14:48Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
--- |
zalewskij/ppo-Huggy | zalewskij | 2025-06-19T17:45:07Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2025-06-19T17:45:00Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: zalewskij/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
gsarch/ViGoRL-MCTS-SFT-7b-Spatial | gsarch | 2025-06-19T17:41:35Z | 0 | 0 | null | [
"safetensors",
"qwen2_5_vl",
"arxiv:2505.23678",
"region:us"
] | null | 2025-06-19T16:19:53Z | # ViGoRL: Visually Grounded Reinforcement Learning for Visual Reasoning
This model card describes the ViGoRL (**Vi**sually **G**r**o**unded **R**einforcement **L**earning) model, introduced in our paper ["Grounded Reinforcement Learning for Visual Reasoning"](https://arxiv.org/abs/2505.23678).
**Authors:** Gabriel Sarch, Snigdha Saha, Naitik Khandelwal, Ayush Jain, Michael J. Tarr, Aviral Kumar, Katerina Fragkiadaki
---
## Model Overview
ViGoRL is a vision-language model fine-tuned using reinforcement learning (RL) to explicitly anchor textual reasoning steps to visual coordinates. Inspired by human visual cognition, ViGoRL employs multi-turn visual grounding, dynamically zooming into image regions to perform fine-grained visual reasoning and grounding.
This model was trained using supervised fine-tuning (SFT) on visually-grounded reasoning traces generated via Monte Carlo Tree Search (MCTS), followed by reinforcement learning with Group Relative Policy Optimization (GRPO).
---
## Model Details
* **Base Architecture:** Qwen2.5-Vision-Language (3B or 7B parameters)
* **Training Paradigm:**
* Supervised Fine-Tuning on MCTS-generated reasoning traces
* Group Relative Policy Optimization (GRPO)
* Multi-turn visual grounding with dynamic zoom-in feedback (if "Multiturn" appears in name)
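As a rough illustration of the zoom-in step (an assumption about the mechanism for illustration only, not the repository's actual code), a grounded point prediction can be expanded into a clamped crop box that is rendered back to the model as a new image turn:

```python
def zoom_crop(x, y, img_w, img_h, box=224):
    """Return a (left, top, right, bottom) crop centered on a predicted
    point (x, y), clamped so the box stays inside the image bounds."""
    half = box // 2
    left = min(max(x - half, 0), max(img_w - box, 0))
    top = min(max(y - half, 0), max(img_h - box, 0))
    return (left, top, left + box, top + box)

# a point near the image corner yields a crop clamped to the image edge
print(zoom_crop(10, 10, 640, 480))
```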
---
## Use Cases
This model excels in visual reasoning tasks that require precise visual grounding and region-level reasoning. Please see the model name for the specific domain.
* **Spatial Reasoning:** SAT-2, BLINK, RoboSpatial
* **Visual Search:** V\*Bench
* **Web Interaction and Grounding:** ScreenSpot (Pro and V2), VisualWebArena
---
## Usage
You can load this model easily using Hugging Face's Transformers library:
```python
from transformers import Qwen2_5_VLForConditionalGeneration, AutoTokenizer, AutoProcessor
from qwen_vl_utils import process_vision_info
import torch
# # default: Load the model on the available device(s)
# model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
#     "gsarch/ViGoRL-MCTS-SFT-7b-Spatial", torch_dtype="auto", device_map="auto"
# ) # replace with any of the ViGoRL models
# We recommend enabling flash_attention_2 for better acceleration and memory saving.
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
"gsarch/ViGoRL-MCTS-SFT-7b-Spatial",
torch_dtype=torch.bfloat16,
attn_implementation="flash_attention_2",
device_map="auto",
)
# default processor
processor = AutoProcessor.from_pretrained("gsarch/ViGoRL-MCTS-SFT-7b-Spatial")
# The default range for the number of visual tokens per image in the model is 4-16384.
# You can set min_pixels and max_pixels according to your needs, such as a token range of 256-1280, to balance performance and cost.
# min_pixels = 256*28*28
# max_pixels = 1280*28*28
# processor = AutoProcessor.from_pretrained("gsarch/ViGoRL-MCTS-SFT-7b-Spatial", min_pixels=min_pixels, max_pixels=max_pixels)
messages = [
{
"role": "user",
"content": [
{
"type": "image",
"image": "path/to/image.png",
},
{"type": "text", "text": "QUERY HERE"},
],
}
]
# Preparation for inference
text = processor.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
text=[text],
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
)
inputs = inputs.to("cuda")
# Inference: Generation of the output
generated_ids = model.generate(**inputs, max_new_tokens=512)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text) # outputs a single tool-call turn if the model version is multiturn
```
**Important**: This model requires a system prompt for proper usage. Please see the model's chat template for details.
---
## Datasets and Training Data
Training datasets and generated reasoning chains are publicly available:
* [Code](https://github.com/Gabesarch/grounded-rl)
* [ViGoRL Datasets on Hugging Face](https://huggingface.co/datasets/gsarch/vigorl_datasets)
---
## Citation
If you use ViGoRL in your research or applications, please cite our paper:
```bibtex
@article{sarch2025vigorl,
title={Grounded Reinforcement Learning for Visual Reasoning},
author={Sarch, Gabriel and Saha, Snigdha and Khandelwal, Naitik and Jain, Ayush and Tarr, Michael J and Kumar, Aviral and Fragkiadaki, Katerina},
year={2025}
}
```
---
## Contact
For questions, feedback, or collaborations, please reach out to Gabriel Sarch or open an issue in our [GitHub repository](https://github.com/Gabesarch/grounded-rl).
--- |
lakhera2023/deepseek-children-stories | lakhera2023 | 2025-06-19T17:40:07Z | 0 | 0 | null | [
"pytorch",
"deepseek",
"children-stories",
"story-generation",
"gpt",
"mixture-of-experts",
"en",
"license:mit",
"region:us"
] | null | 2025-06-19T17:35:34Z | ---
language: en
tags:
- deepseek
- children-stories
- story-generation
- pytorch
- gpt
- mixture-of-experts
license: mit
---
# DeepSeek Children's Stories Model
A lightweight (15-18M parameters) story generation model specifically designed for children's content, featuring advanced architecture components like Mixture of Experts (MoE) and Multihead Latent Attention (MLA).
## Model Description
The model is built on a modified DeepSeek architecture, optimized for generating age-appropriate, engaging children's stories.
### Key Features
- **Size**: ~15-18M parameters
- **Architecture**: 6 layers, 8 heads, 512 embedding dimension
- **Context Window**: 1024 tokens
- **Special Components**:
- Mixture of Experts (MoE) with 4 experts
- Multihead Latent Attention (MLA)
- Multi-token prediction
- Rotary Positional Encodings (RoPE)
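To make the MoE component concrete, here is a toy top-1 routing sketch in plain Python (purely illustrative; in the real model the experts are feed-forward networks and the gate scores come from a learned router):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def moe_forward(token, gate_scores, experts):
    """Toy top-1 MoE: route to the highest-scoring expert and scale its
    output by that expert's gate probability."""
    probs = softmax(gate_scores)
    top = max(range(len(probs)), key=lambda i: probs[i])
    return probs[top] * experts[top](token), top

# four toy "experts", each a simple scalar transform standing in for an FFN
experts = [lambda x: x + 1, lambda x: 2 * x, lambda x: x * x, lambda x: -x]
out, chosen = moe_forward(3.0, gate_scores=[0.1, 2.0, 0.3, 0.0], experts=experts)
print(chosen, round(out, 3))
```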
### Training
- **Dataset**: ajibawa-2023/Children-Stories-Collection
- **Training Time**: ~2,884 seconds
- **Hardware**: NVIDIA RTX 4090 (24GB VRAM)
- **Memory Usage**: ~2.24GB GPU memory
## Training Metrics

The training metrics show:
- Rapid initial convergence (loss drops from 10.89 to 3.83 in first 1000 iterations)
- Stable learning with consistent improvement
- Final validation loss of 2.12
- Cosine learning rate schedule with warmup
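The warmup-plus-cosine schedule mentioned above can be sketched as a function of the training step (an illustrative version; the exact warmup length and learning-rate floor used in training are not documented here):

```python
import math

def cosine_lr(step, max_steps, base_lr, warmup_steps):
    """Linear warmup to base_lr, then cosine decay toward 0 over the
    remaining steps."""
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    progress = (step - warmup_steps) / max(1, max_steps - warmup_steps)
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * progress))

# peaks right after warmup, decays to ~0 at the final step
print(cosine_lr(100, 1000, 3e-4, 100), cosine_lr(1000, 1000, 3e-4, 100))
```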
## Example Output
**Prompt**: "Once upon a time"
**Generated Story**:
```
it was a bright, sunny day, and lily and her little brother max were playing in their backyard. they found a piece of paper with two sentence written on it. "let's make sense of some of these sentences," said max, pointing to the first sentence. "these people are playing on the grass," "but i don't know," replied lily. she thought for a moment. "maybe they only talk with the others or not, right?" she asked. max nodded. "yeah, and what about 'he', 'he', 'an', 'man', and 'man'?" lily explained, "it means they're playing with their dogs. but they don't say anything about someone talking." max asked, "but what about the others? we don't talk to each other!" lily thought for a moment before answering, "that's right! sometimes, people try to talk to each other. when we talk about something, we need to tell others"
```
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
# Load model and tokenizer
model = AutoModelForCausalLM.from_pretrained("lakhera2023/deepseek-children-stories")
tokenizer = AutoTokenizer.from_pretrained("lakhera2023/deepseek-children-stories")
# Generate text
prompt = "Once upon a time"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200, temperature=0.8)
story = tokenizer.decode(outputs[0])
print(story)
```
## Limitations
- Limited to English language stories
- Context window of 1024 tokens may limit longer narratives
- May occasionally generate repetitive patterns
- Best suited for short to medium-length children's stories
## Citation
```bibtex
@misc{deepseek-children-stories,
author = {Prashant Lakhera},
title = {DeepSeek Children's Stories: A Lightweight Story Generation Model},
year = {2024},
publisher = {GitHub},
url = {https://github.com/ideaweaver-ai/DeepSeek-Children-Stories-15M-model}
}
```
## License
This project is licensed under the MIT License. See the LICENSE file for details. |
loim/whiff-tokenizer-12k | loim | 2025-06-19T17:35:09Z | 0 | 0 | null | [
"bpe",
"tokenizer",
"tokipona",
"token-classification",
"ru",
"en",
"dataset:Vikhrmodels/GrandMaster-PRO-MAX",
"dataset:Den4ikAI/ru_sberquad_long_answers",
"dataset:HuggingFaceH4/ultrachat_200k",
"dataset:IlyaGusev/gpt_roleplay_realm",
"dataset:loim/characters_dialogs",
"dataset:OpenAssistant/oasst1",
"dataset:OpenAssistant/oasst2",
"license:apache-2.0",
"region:us"
] | token-classification | 2025-06-19T16:49:51Z | ---
license: apache-2.0
datasets:
- Vikhrmodels/GrandMaster-PRO-MAX
- Den4ikAI/ru_sberquad_long_answers
- HuggingFaceH4/ultrachat_200k
- IlyaGusev/gpt_roleplay_realm
- loim/characters_dialogs
- OpenAssistant/oasst1
- OpenAssistant/oasst2
language:
- ru
- en
pipeline_tag: token-classification
tags:
- bpe
- tokenizer
- tokipona
---
<details>
<summary>🇷🇺 Русский...</summary>
# **Русско-английский BPE-токенизатор**
Оптимизированный токенизатор для трёхъязычных текстов с расширенной поддержкой русской лексики и эффективной обработкой английского и токипона.
## **Ключевые характеристики**
- **Формат**: BPE (Byte-Pair Encoding)
- **Размер словаря**: 12 288 токенов
- **Языки**: Русский + Английский + Токипона (просто потому что могу и это ничего не стоит)
- **Специальные токены**:
`<|endoftext|>`
`<|padding|>`
`<|mask|>`
`<|user|>`
`<|assistant|>`
`<|system|>`
`<|end|>`
`<|en|>`
`<|ru|>`
`<|tok|>`
`<|`
`|>`
</details>
<details>
<summary>🇬🇧 English...</summary>
# **Russian-English BPE tokenizer**
Optimized tokenizer for trilingual texts with extended support for Russian vocabulary and efficient processing of English and Toki pona.
## **Key Features**
- **Format**: BPE (Byte-Pair Encoding)
- **Dictionary size**: 12 288 tokens
- **Languages**: Russian + English + Toki pona (just because I can and it costs nothing)
- **Special tokens**:
`<|endoftext|>`
`<|padding|>`
`<|mask|>`
`<|user|>`
`<|assistant|>`
`<|system|>`
`<|end|>`
`<|en|>`
`<|ru|>`
`<|tok|>`
`<|`
`|>`
</details>
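The chat-style special tokens listed above suggest a simple prompt layout. A minimal sketch of assembling such a prompt follows; the exact ordering is an assumption for illustration, not a documented template:

```python
# Assemble a single-turn chat prompt from the tokenizer's special tokens.
# NOTE: the template below is an assumption for illustration; check the
# tokenizer's actual chat template for the authoritative format.
SPECIAL = {
    "system": "<|system|>",
    "user": "<|user|>",
    "assistant": "<|assistant|>",
    "end": "<|end|>",
}

def build_prompt(system: str, user: str, lang: str = "<|en|>") -> str:
    """Concatenate a language tag plus role-tagged segments, ending with an
    open assistant tag so the model continues from there."""
    return (
        f"{lang}"
        f"{SPECIAL['system']}{system}{SPECIAL['end']}"
        f"{SPECIAL['user']}{user}{SPECIAL['end']}"
        f"{SPECIAL['assistant']}"
    )

prompt = build_prompt("You are helpful.", "Hi!")
print(prompt)
```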
---
<details>
<summary>🧪 Tests...</summary>
### English text (27741474 chars, 4613167 words)
| Tokenizer | Tokens | Compression | Vocab Size | Vocab Used | Vocab Usage % | Avg Token Length | Perfect Detokenization | Tokenization Time (s) | Detokenization Time (s) | Max Length |
|---|---|---|---|---|---|---|---|---|---|---|
| deepseek-ai/DeepSeek-V3 | 5639822 | 1.22 | 128000 | 60979 | 47.6 | 4.9 | 1 | 17.8162 | 3.7699 | 131072 |
| RefalMachine/RuadaptQwen3-32B-Instruct | 5705024 | 1.24 | 146213 | 61580 | 42.1 | 4.9 | 1 | 17.6528 | 4.2012 | 131072 |
| Gensyn/Qwen2.5-1.5B-Instruct | 5708987 | 1.24 | 151643 | 60135 | 39.7 | 4.9 | 1 | 19.3785 | 3.9194 | 131072 |
| deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B | 5708988 | 1.24 | 151643 | 60136 | 39.7 | 4.9 | 1 | 18.9563 | 1.6886 | 16384 |
| IlyaGusev/saiga_nemo_12b | 5806480 | 1.26 | 131072 | 56865 | 43.4 | 4.8 | 1 | 18.4329 | 3.1752 | 1024000 |
| openai-community/gpt2 | 5836927 | 1.27 | 50257 | 45466 | 90.5 | 4.8 | 1 | 16.6623 | 2.2766 | 1024 |
| facebook/opt-125m | 5836928 | 1.27 | 50265 | 45467 | 90.5 | 4.8 | 1 | 19.4051 | 3.7256 | 1E+030 |
| Vikhrmodels/Vikhr-YandexGPT-5-Lite-8B-it | 5984540 | 1.3 | 129024 | 51435 | 39.9 | 4.6 | 1 | 14.5142 | 3.0903 | 16384 |
| yandex/YandexGPT-5-Lite-8B-instruct | 5984540 | 1.3 | 129024 | 51435 | 39.9 | 4.6 | 1 | 15.081 | 4.5032 | 1E+030 |
| IlyaGusev/saiga_yandexgpt_8b | 5984540 | 1.3 | 129024 | 51435 | 39.9 | 4.6 | 1 | 15.7957 | 3.6403 | 32768 |
| loim/whiff-tokenizer-12k | 6271746 | 1.36 | 12288 | 9611 | 78.2 | 4.4 | 1 | 41.6606 | 1.5217 | 65536 |
| TinyLlama/TinyLlama-1.1B-Chat-v1.0 | 6655231 | 1.44 | 32000 | 24919 | 77.9 | 4.2 | 1 | 43.1161 | 5.5738 | 2048 |
| ai-forever/ruGPT-3.5-13B | 7154363 | 1.55 | 50257 | 12582 | 25.0 | 3.9 | 0 | 15.711 | 11.2961 | 2048 |
| loim/whiff-tokenizer-8k | 7369398 | 1.6 | 8192 | 7456 | 91.0 | 3.8 | 1 | 32.1512 | 1.6195 | 32768 |
| ai-forever/rugpt3small_based_on_gpt2 | 7749641 | 1.68 | 50257 | 10938 | 21.8 | 3.6 | 0 | 16.4294 | 8.9582 | 2048 |
### Russian text (16315296 chars, 2185925 words)
| Tokenizer | Tokens | Compression | Vocab Size | Vocab Used | Vocab Usage % | Avg Token Length | Perfect Detokenization | Tokenization Time (s) | Detokenization Time (s) | Max Length |
|---|---|---|---|---|---|---|---|---|---|---|
| Vikhrmodels/Vikhr-YandexGPT-5-Lite-8B-it | 3475768 | 1.59 | 129024 | 67971 | 52.7 | 4.7 | 1 | 9.6723 | 1.4114 | 16384 |
| IlyaGusev/saiga_yandexgpt_8b | 3475768 | 1.59 | 129024 | 67971 | 52.7 | 4.7 | 1 | 10.1863 | 1.8007 | 32768 |
| yandex/YandexGPT-5-Lite-8B-instruct | 3475768 | 1.59 | 129024 | 67971 | 52.7 | 4.7 | 1 | 10.3878 | 4.8323 | 1E+030 |
| ai-forever/ruGPT-3.5-13B | 3693945 | 1.69 | 50257 | 43208 | 86.0 | 4.4 | 0 | 16.1615 | 3.9659 | 2048 |
| RefalMachine/RuadaptQwen3-32B-Instruct | 3732533 | 1.71 | 146213 | 52564 | 36.0 | 4.4 | 1 | 16.5792 | 2.4271 | 131072 |
| ai-forever/rugpt3small_based_on_gpt2 | 3801887 | 1.74 | 50257 | 42820 | 85.2 | 4.3 | 0 | 17.1418 | 2.9581 | 2048 |
| loim/whiff-tokenizer-12k | 4070967 | 1.86 | 12288 | 9306 | 75.7 | 4.0 | 1 | 35.0603 | 1.3202 | 65536 |
| deepseek-ai/DeepSeek-V3 | 4806676 | 2.2 | 128000 | 21621 | 16.9 | 3.4 | 1 | 15.8833 | 2.2505 | 131072 |
| IlyaGusev/saiga_nemo_12b | 4926095 | 2.25 | 131072 | 21901 | 16.7 | 3.3 | 1 | 15.2355 | 3.6558 | 1024000 |
| Gensyn/Qwen2.5-1.5B-Instruct | 5411283 | 2.48 | 151643 | 20458 | 13.5 | 3.0 | 1 | 14.6061 | 1.9548 | 131072 |
| deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B | 5411284 | 2.48 | 151643 | 20459 | 13.5 | 3.0 | 1 | 16.4851 | 1.5277 | 16384 |
| TinyLlama/TinyLlama-1.1B-Chat-v1.0 | 5986567 | 2.74 | 32000 | 13454 | 42.0 | 2.7 | 1 | 20.6121 | 1.9489 | 2048 |
| loim/whiff-tokenizer-8k | 6090683 | 2.79 | 8192 | 5749 | 70.2 | 2.7 | 1 | 24.6047 | 1.4503 | 32768 |
| openai-community/gpt2 | 16931837 | 7.75 | 50257 | 13818 | 27.5 | 1.0 | 1 | 19.4 | 6.16 | 1024 |
| facebook/opt-125m | 16931838 | 7.75 | 50265 | 13819 | 27.5 | 1.0 | 1 | 22.1165 | 4.2726 | 1E+030 |
### Toki pona text (3663780 chars, 831463 words)
| Tokenizer | Tokens | Compression | Vocab Size | Vocab Used | Vocab Usage % | Avg Token Length | Perfect Detokenization | Tokenization Time (s) | Detokenization Time (s) | Max Length |
|---|---|---|---|---|---|---|---|---|---|---|
| loim/whiff-tokenizer-12k | 1144322 | 1.38 | 12288 | 2927 | 23.8 | 3.2 | 1 | 4.145 | 0.2371 | 65536 |
| IlyaGusev/saiga_nemo_12b | 1332599 | 1.6 | 131072 | 8428 | 6.4 | 2.7 | 1 | 2.7613 | 0.7956 | 1024000 |
| deepseek-ai/DeepSeek-V3 | 1343359 | 1.62 | 128000 | 8870 | 6.9 | 2.7 | 1 | 2.6998 | 0.4471 | 131072 |
| RefalMachine/RuadaptQwen3-32B-Instruct | 1396348 | 1.68 | 146213 | 7546 | 5.2 | 2.6 | 1 | 2.3745 | 2.2573 | 131072 |
| Gensyn/Qwen2.5-1.5B-Instruct | 1393944 | 1.68 | 151643 | 7931 | 5.2 | 2.6 | 1 | 2.181 | 0.3505 | 131072 |
| deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B | 1393945 | 1.68 | 151643 | 7932 | 5.2 | 2.6 | 1 | 2.6367 | 0.3489 | 16384 |
| Vikhrmodels/Vikhr-YandexGPT-5-Lite-8B-it | 1481531 | 1.78 | 129024 | 7306 | 5.7 | 2.5 | 1 | 2.2853 | 1.3855 | 16384 |
| yandex/YandexGPT-5-Lite-8B-instruct | 1481531 | 1.78 | 129024 | 7306 | 5.7 | 2.5 | 1 | 2.359 | 1.2527 | 1E+030 |
| IlyaGusev/saiga_yandexgpt_8b | 1481531 | 1.78 | 129024 | 7306 | 5.7 | 2.5 | 1 | 2.5027 | 2.1723 | 32768 |
| TinyLlama/TinyLlama-1.1B-Chat-v1.0 | 1536792 | 1.85 | 32000 | 6322 | 19.8 | 2.4 | 1 | 4.2253 | 0.6623 | 2048 |
| openai-community/gpt2 | 1550846 | 1.87 | 50257 | 6680 | 13.3 | 2.4 | 1 | 2.7572 | 0.7449 | 1024 |
| facebook/opt-125m | 1550847 | 1.87 | 50265 | 6681 | 13.3 | 2.4 | 1 | 2.4144 | 0.6391 | 1E+030 |
| ai-forever/ruGPT-3.5-13B | 1828262 | 2.2 | 50257 | 3881 | 7.7 | 2.0 | 0 | 2.1597 | 0.7194 | 2048 |
| ai-forever/rugpt3small_based_on_gpt2 | 1925501 | 2.32 | 50257 | 3697 | 7.4 | 1.9 | 0 | 1.9954 | 0.8262 | 2048 |
| loim/whiff-tokenizer-8k | 2123707 | 2.55 | 8192 | 2709 | 33.1 | 1.7 | 1 | 2.4541 | 0.3799 | 32768 |
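For reference, the table columns can be reproduced from raw counts as follows (a sketch using a toy fixed-width "tokenizer"; the benchmark's exact word-splitting rule is an assumption):

```python
def tokenizer_metrics(text, tokenize):
    """Token count, tokens-per-word ('Compression'), and average characters
    per token ('Avg Token Length'), mirroring the table columns above."""
    tokens = tokenize(text)
    words = text.split()
    n = len(tokens)
    return {
        "tokens": n,
        "compression": round(n / len(words), 2),
        "avg_token_length": round(len(text) / n, 1),
    }

# toy "tokenizer": split the text into fixed 3-character chunks
chunks3 = lambda s: [s[i:i + 3] for i in range(0, len(s), 3)]
print(tokenizer_metrics("the quick brown fox", chunks3))
```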
</details> |
morturr/Llama-2-7b-hf-PAIR_dadjokes_headlines-COMB-dadjokes-comb-2-seed-28-2025-06-19 | morturr | 2025-06-19T17:32:05Z | 0 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-19T17:31:55Z | ---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-PAIR_dadjokes_headlines-COMB-dadjokes-comb-2-seed-28-2025-06-19
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-PAIR_dadjokes_headlines-COMB-dadjokes-comb-2-seed-28-2025-06-19
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 16
- seed: 28
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1 |
oukacisarah/fine_tunning_Qwen1.7B_with_DziriFake | oukacisarah | 2025-06-19T17:31:07Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-01T21:45:18Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
redmint/studybuddy-ai | redmint | 2025-06-19T17:30:04Z | 32 | 0 | transformers | [
"transformers",
"safetensors",
"phi",
"text-generation",
"study-assistant",
"emotionally-intelligent",
"casual-ai",
"en",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-10T17:33:06Z | ---
license: cc-by-nc-4.0
tags:
- text-generation
- study-assistant
- emotionally-intelligent
- casual-ai
inference: true
language:
- en
library_name: transformers
pipeline_tag: text-generation
widget:
- text: Can you help me study math?
---
---
base_model: microsoft/phi-2
library_name: peft
tags:
- study-assistant
- education
- phi-2
- fine-tuned
- emotional-intelligence
license: cc-by-nc-4.0
inference: true
---
# Study Buddy AI (Lite)
A fine-tuned version of Microsoft's Phi-2, trained to act as a casual, emotionally intelligent study assistant that helps users learn deeply through conversation. Designed for real-time interaction inside a lightweight web app.
 <!-- Replace this with your logo when ready -->
## Model Details
### Model Description
Study Buddy AI Lite is built on `microsoft/phi-2` and fine-tuned using PEFT (Parameter-Efficient Fine-Tuning). It was trained to behave like a study partner — casual, kind, encouraging, and deeply helpful for academic subjects and personal growth.
### Features
- Friendly tone tailored for Gen Z and students
- Explains complex topics using simple language
- Emotionally intelligent responses — supportive and motivational
- Great for studying, reviewing, or breaking down hard concepts
### Use Cases
- Personal tutoring via chat
- Homework help and study review
- Emotional motivation during learning sessions
- Supplementing lessons with a relatable assistant
### Limitations
- May hallucinate and occasionally continue the conversation as the user; these extra turns can be filtered out in the frontend
- Best used in short back-and-forths, not long essays
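The frontend-side cleanup mentioned above can be sketched as a small pure-Python helper. The marker strings are assumptions — adjust them to whatever role labels your prompt template actually uses:

```python
# Trim the reply at the first point where the model starts hallucinating a
# user turn. USER_MARKERS is an assumption, not the project's real list.
USER_MARKERS = ("\nUser:", "\nStudent:", "\nYou:")

def trim_hallucinated_turns(reply: str) -> str:
    """Return the assistant reply truncated before any hallucinated user turn."""
    cut = len(reply)
    for marker in USER_MARKERS:
        idx = reply.find(marker)
        if idx != -1:
            cut = min(cut, idx)
    return reply[:cut].strip()
```

For example, `trim_hallucinated_turns("Sure, let's review fractions!\nUser: thanks")` drops everything from the fabricated `User:` turn onward.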
## Training & Fine-Tuning
- **Base model**: `microsoft/phi-2`
- **Fine-tuning method**: LoRA via the `peft` library
- **Custom dataset**: 3,000+ curated prompts focused on simplified teaching, emotional tone, and helpful behavior
- **Hosted via**: Hugging Face Inference Endpoints (GPU-backed)
## How It Works
Frontend ➝ Flask API ➝ Hugging Face Model ➝ Response ➝ KnowBetter Logic (to clean output)
Built to run inside a React + Flask stack with CORS enabled and token-secured backend access to Hugging Face.
## Model Creator
**Creator**: Micah Miles
**GitHub/Contact**: redmint-him
**Team**: Redmint
**Project**: Study Buddy Lite
## License
This model is licensed under the **Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0)**.
You are free to:
- Use the model for **personal or educational purposes**
- Modify, remix, or build upon it
**You may not**:
- Sell, sublicense, or use the model or its outputs for any commercial purposes
© 2025 Micah Miles
## NOTICE
This project is licensed for **non-commercial use only**.
For commercial licensing or partnership inquiries, contact: [email protected]
---
✨ “Never back down. Never give up.” — This is for every student who wanted to learn differently.
I'm just a kid with a dream. |
Hailay/fasttext-tigrinya | Hailay | 2025-06-19T17:29:46Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-06-19T17:20:40Z | ---
license: apache-2.0
---
|
Quantumhash/PixelScribe | Quantumhash | 2025-06-19T17:28:21Z | 127 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"image-generation",
"Diffusers",
"PixelScribe",
"flux",
"en",
"license:other",
"diffusers:PixelScribePipeline",
"region:us"
] | text-to-image | 2025-04-20T16:45:54Z | ---
language:
- en
license: other
license_name: pixelscribe-non-commercial-license
license_link: LICENSE.md
tags:
- text-to-image
- image-generation
- Diffusers
- PixelScribe
- flux
base_model:
- sbapan41/PixelScribe
new_version: sbapan41/PixelScribe
pipeline_tag: text-to-image
library_name: diffusers
---
<div align="center">
<img src="https://huggingface.co/datasets/Quantamhash/Assets/resolve/main/images/dark_logo.png"
alt="Title card"
style="width: 500px;
height: auto;
object-position: center top;">
</div>
`PixelScribe` is a 12-billion-parameter rectified flow transformer capable of generating images from text descriptions.
# Key Features
1. Cutting-edge output quality, second only to our state-of-the-art model `PixelScribe`.
2. Competitive prompt following, matching the performance of closed-source alternatives.
3. Trained using guidance distillation, making `PixelScribe` more efficient.
4. Open weights to drive new scientific research, and empower artists to develop innovative workflows.
5. Generated outputs can be used for personal, scientific, and commercial purposes as described in the [`PixelScribe` Non-Commercial License].
# Usage
We provide a reference implementation of `PixelScribe`, as well as sampling code, in a dedicated [github repository](https://github.com/MotoBwi/PixelScribe.git).
Developers and creatives looking to build on top of `PixelScribe` are encouraged to use this as a starting point.
## Diffusers
To use `PixelScribe` with the 🧨 diffusers python library, first install or upgrade diffusers
```shell
pip install -U diffusers
```
Then you can use `FluxPipeline` to run the model
```python
import torch
from diffusers import FluxPipeline
pipe = FluxPipeline.from_pretrained("sbapan41/PixelScribe", torch_dtype=torch.bfloat16)
pipe.enable_model_cpu_offload() #save some VRAM by offloading the model to CPU. Remove this if you have enough GPU power
prompt = "A cat holding a sign that says hello world"
image = pipe(
prompt,
height=1024,
width=1024,
guidance_scale=3.5,
num_inference_steps=50,
max_sequence_length=512,
generator=torch.Generator("cpu").manual_seed(0)
).images[0]
image.save("PixelScribe.png")
```
---
# Limitations
- This model is not intended or able to provide factual information.
- As a statistical model this checkpoint might amplify existing societal biases.
- The model may fail to generate output that matches the prompts.
- Prompt following is heavily influenced by the prompting-style.
# Out-of-Scope Use
The model and its derivatives may not be used
- In any way that violates any applicable national, federal, state, local or international law or regulation.
- For the purpose of exploiting, harming or attempting to exploit or harm minors in any way; including but not limited to the solicitation, creation, acquisition, or dissemination of child exploitative content.
- To generate or disseminate verifiably false information and/or content with the purpose of harming others.
- To generate or disseminate personal identifiable information that can be used to harm an individual.
- To harass, abuse, threaten, stalk, or bully individuals or groups of individuals.
- To create non-consensual nudity or illegal pornographic content.
- For fully automated decision making that adversely impacts an individual's legal rights or otherwise creates or modifies a binding, enforceable obligation.
- Generating or facilitating large-scale disinformation campaigns.
# License
This model falls under the [`PixelScribe` Non-Commercial License]. |
lmstudio-community/Skywork-SWE-32B-GGUF | lmstudio-community | 2025-06-19T17:25:44Z | 0 | 0 | null | [
"gguf",
"swe-bench",
"text-generation",
"base_model:Skywork/Skywork-SWE-32B",
"base_model:quantized:Skywork/Skywork-SWE-32B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-06-19T17:04:08Z | ---
quantized_by: bartowski
pipeline_tag: text-generation
base_model: Skywork/Skywork-SWE-32B
license: apache-2.0
base_model_relation: quantized
tags:
- swe-bench
metrics:
- pass@1
---
## 💫 Community Model> Skywork SWE 32B by Skywork
*👾 [LM Studio](https://lmstudio.ai) Community models highlights program. Highlighting new & noteworthy models by the community. Join the conversation on [Discord](https://discord.gg/aPQfnNkxGC)*.
**Model creator:** [Skywork](https://huggingface.co/Skywork)<br>
**Original model**: [Skywork-SWE-32B](https://huggingface.co/Skywork/Skywork-SWE-32B)<br>
**GGUF quantization:** provided by [bartowski](https://huggingface.co/bartowski) based on `llama.cpp` release [b5697](https://github.com/ggerganov/llama.cpp/releases/tag/b5697)<br>
## Technical Details
Supports a context length of 32k tokens.
Skywork-SWE-32B is a code agent model developed by [Skywork AI](https://skywork.ai/home), specifically designed for software engineering (SWE) tasks.
Skywork-SWE-32B attains 38.0% pass@1 accuracy on the [SWE-bench Verified](https://www.swebench.com) benchmark, outperforming previous open-source SoTA [Qwen2.5-Coder-32B-based](https://huggingface.co/Qwen/Qwen2.5-Coder-32B) LLMs built on the [OpenHands](https://github.com/All-Hands-AI/OpenHands) agent framework.
When incorporated with test-time scaling techniques, the performance further improves to 47.0% accuracy, surpassing the previous SoTA results for sub-32B parameter models.
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
## Disclaimers
LM Studio is not the creator, originator, or owner of any Model featured in the Community Model Program. Each Community Model is created and provided by third parties. LM Studio does not endorse, support, represent or guarantee the completeness, truthfulness, accuracy, or reliability of any Community Model. You understand that Community Models can produce content that might be offensive, harmful, inaccurate or otherwise inappropriate, or deceptive. Each Community Model is the sole responsibility of the person or entity who originated such Model. LM Studio may not monitor or control the Community Models and cannot, and does not, take responsibility for any such Model. LM Studio disclaims all warranties or guarantees about the accuracy, reliability or benefits of the Community Models. LM Studio further disclaims any warranty that the Community Model will meet your requirements, be secure, uninterrupted or available at any time or location, or error-free, viruses-free, or that any errors will be corrected, or otherwise. You will be solely responsible for any damage resulting from your use of or access to the Community Models, your downloading of any Community Model, or use of any other Community Model provided by or through LM Studio.
|
mradermacher/ICONN-e1-GGUF | mradermacher | 2025-06-19T17:23:39Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"emotional-ai",
"ICONN",
"chatbot",
"base",
"en",
"base_model:ICONNAI/ICONN-e1",
"base_model:quantized:ICONNAI/ICONN-e1",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T01:19:17Z | ---
base_model: ICONNAI/ICONN-e1
extra_gated_fields:
Country: country
Date of agreement: date_picker
Full name: text
I agree to all terms in the ICONN AI License Agreement, including:
options:
- I will NOT use this model for commercial purposes without explicit written permission.
- I will NOT redistribute, upload, or share this model in any public or private
repository.
- I will NOT train new models or derivatives from this model.
- I will NOT use this model for unethical, harmful, deceptive, exploitative, or
surveillance purposes.
- I understand this license may be revoked if I breach any terms.
type: checkbox
I am using this model for:
options:
- Personal use
- Internal business use
- Academic research
- Educational purposes
- label: Other (explain below)
value: other
type: select
Organization (if any): text
Purpose explanation (if "Other"): text
extra_gated_prompt: |
By accessing or downloading this model, you agree to the ICONN AI License Agreement. This includes restrictions on commercial use, redistribution, derivative model training, and uploading to public or private repositories. You may not use this model to harm, surveil, deceive, exploit, manipulate, or conduct unethical AI research. All use must comply with ethical standards and respect human dignity.
language:
- en
library_name: transformers
license: other
license_link: LICENSE
license_name: iconn
quantized_by: mradermacher
tags:
- emotional-ai
- ICONN
- chatbot
- base
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/ICONNAI/ICONN-e1
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/ICONN-e1-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
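As a concrete sketch of that concatenation step, the multi-part quants in the table below can be reassembled into a single GGUF before loading. The helper below is an assumption about your local file layout, not part of llama.cpp itself:

```shell
# Reassemble a split GGUF into one file. The shell glob sorts part1of2
# before part2of2, so plain cat order is correct (for up to nine parts).
merge_gguf() {
  local base="$1"   # e.g. ICONN-e1.Q4_K_M.gguf
  cat "${base}".part*of* > "${base}"
}

# Usage: merge_gguf ICONN-e1.Q4_K_M.gguf
```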
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/ICONN-e1-GGUF/resolve/main/ICONN-e1.Q2_K.gguf) | Q2_K | 30.9 | |
| [GGUF](https://huggingface.co/mradermacher/ICONN-e1-GGUF/resolve/main/ICONN-e1.Q3_K_S.gguf) | Q3_K_S | 36.5 | |
| [GGUF](https://huggingface.co/mradermacher/ICONN-e1-GGUF/resolve/main/ICONN-e1.Q3_K_M.gguf) | Q3_K_M | 40.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/ICONN-e1-GGUF/resolve/main/ICONN-e1.Q3_K_L.gguf) | Q3_K_L | 43.6 | |
| [GGUF](https://huggingface.co/mradermacher/ICONN-e1-GGUF/resolve/main/ICONN-e1.IQ4_XS.gguf) | IQ4_XS | 45.5 | |
| [GGUF](https://huggingface.co/mradermacher/ICONN-e1-GGUF/resolve/main/ICONN-e1.Q4_K_S.gguf) | Q4_K_S | 47.9 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/ICONN-e1-GGUF/resolve/main/ICONN-e1.Q4_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/ICONN-e1-GGUF/resolve/main/ICONN-e1.Q4_K_M.gguf.part2of2) | Q4_K_M | 51.0 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/ICONN-e1-GGUF/resolve/main/ICONN-e1.Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/ICONN-e1-GGUF/resolve/main/ICONN-e1.Q5_K_S.gguf.part2of2) | Q5_K_S | 57.9 | |
| [PART 1](https://huggingface.co/mradermacher/ICONN-e1-GGUF/resolve/main/ICONN-e1.Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/ICONN-e1-GGUF/resolve/main/ICONN-e1.Q5_K_M.gguf.part2of2) | Q5_K_M | 59.7 | |
| [PART 1](https://huggingface.co/mradermacher/ICONN-e1-GGUF/resolve/main/ICONN-e1.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/ICONN-e1-GGUF/resolve/main/ICONN-e1.Q6_K.gguf.part2of2) | Q6_K | 69.0 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/ICONN-e1-GGUF/resolve/main/ICONN-e1.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/ICONN-e1-GGUF/resolve/main/ICONN-e1.Q8_0.gguf.part2of2) | Q8_0 | 89.3 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Mathlesage/EuroBERT-210m-retrieval-v2 | Mathlesage | 2025-06-19T17:20:06Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"french-retrieval-model",
"french",
"bert",
"retrieval",
"sentence-embeddings",
"sentence-similarity",
"fr",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2025-06-19T17:19:43Z |
---
language: fr
license: mit
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- french
- bert
- retrieval
- sentence-embeddings
---
# Mathlesage/EuroBERT-210m-retrieval-v2
This model is a version of `EuroBERT/EuroBERT-210m` fine-tuned for retrieval and semantic-similarity tasks in French.
It was trained with a `TripletLoss` strategy on the `Mathlesage/wiki_positive_negative` dataset.
## Usage (with sentence-transformers)
To use this model, you will need the `sentence-transformers` library and the code for the custom `FrenchRetrievalConfig` and `FrenchRetrievalModel` classes.
1. Install the libraries:
```bash
pip install -U sentence-transformers transformers
```
2. Save the code for the `FrenchRetrievalConfig` and `FrenchRetrievalModel` classes in a file, e.g. `model.py`.
3. Use the model:
```python
from sentence_transformers import SentenceTransformer
from model import FrenchRetrievalModel  # make sure model.py is importable
model_name = "Mathlesage/EuroBERT-210m-retrieval-v2"
model = SentenceTransformer(model_name)
sentences = ["Ceci est une phrase d'exemple.", "Chaque phrase est convertie en vecteur."]
embeddings = model.encode(sentences)
print(embeddings.shape)
```
|
aleegis/cf86b7ee-b3f5-4687-8e7e-c06ccfcfe8e8 | aleegis | 2025-06-19T17:18:45Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"axolotl",
"trl",
"grpo",
"conversational",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-Coder-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-Coder-7B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-19T17:10:16Z | ---
base_model: Qwen/Qwen2.5-Coder-7B-Instruct
library_name: transformers
model_name: cf86b7ee-b3f5-4687-8e7e-c06ccfcfe8e8
tags:
- generated_from_trainer
- axolotl
- trl
- grpo
licence: license
---
# Model Card for cf86b7ee-b3f5-4687-8e7e-c06ccfcfe8e8
This model is a fine-tuned version of [Qwen/Qwen2.5-Coder-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="aleegis/cf86b7ee-b3f5-4687-8e7e-c06ccfcfe8e8", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/fajarchen-fajar-chen/Gradients-On-Demand/runs/s37kl4yr)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.5.1+cu124
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
phospho-app/joshvista-ACT_BBOX-PickAndPlace-b9me0 | phospho-app | 2025-06-19T17:17:37Z | 0 | 0 | null | [
"phosphobot",
"act",
"region:us"
] | null | 2025-06-19T17:16:44Z |
---
tags:
- phosphobot
- act
task_categories:
- robotics
---
# act Model - phospho Training Pipeline
## Error Traceback
We faced an issue while training your model.
```
Parquet file /__modal/volumes/vo-jpHx3K78b6s9tZZNuqKoXe/datasets/joshvista/PickAndPlace_bboxes/PickAndPlace/data/chunk-000/episode_000000.parquet does not contain 'observation.environment_state' key. This is unexpected after computing bounding boxes.
```
## Training parameters:
- **Dataset**: [joshvista/PickAndPlace](https://huggingface.co/datasets/joshvista/PickAndPlace)
- **Wandb run URL**: None
- **Epochs**: None
- **Batch size**: 100
- **Training steps**: 10000
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
ThisIsAoT/jotaro-mistral-v2 | ThisIsAoT | 2025-06-19T17:14:35Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T17:12:42Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
moyixiao/qwen25_mimo_r32_2920 | moyixiao | 2025-06-19T17:11:56Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-19T17:10:57Z | ---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ryzax/qwen3_1.7B_sft_correct_v5_new_1e-5_4 | ryzax | 2025-06-19T17:11:35Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:Qwen/Qwen3-1.7B",
"base_model:finetune:Qwen/Qwen3-1.7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-19T17:01:13Z | ---
base_model: Qwen/Qwen3-1.7B
library_name: transformers
model_name: qwen3_1.7B_sft_correct_v5_new_1e-5_4
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for qwen3_1.7B_sft_correct_v5_new_1e-5_4
This model is a fine-tuned version of [Qwen/Qwen3-1.7B](https://huggingface.co/Qwen/Qwen3-1.7B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ryzax/qwen3_1.7B_sft_correct_v5_new_1e-5_4", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/zc096373/s1/runs/4cif7vm4)
This model was trained with SFT.
### Framework versions
- TRL: 0.19.0
- Transformers: 4.52.4
- Pytorch: 2.6.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
morturr/Llama-2-7b-hf-PAIR_amazon_dadjokes-COMB-amazon-comb-1-seed-18-2025-06-19 | morturr | 2025-06-19T17:08:17Z | 0 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-19T17:07:37Z | ---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-PAIR_amazon_dadjokes-COMB-amazon-comb-1-seed-18-2025-06-19
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-PAIR_amazon_dadjokes-COMB-amazon-comb-1-seed-18-2025-06-19
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 18
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
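The batch-size numbers above are related: the reported total is just the per-device batch size times the gradient accumulation steps. A quick sanity check (assuming a single device, which the card does not state):

```python
train_batch_size = 8             # per-device micro-batch size, as reported above
gradient_accumulation_steps = 4
num_devices = 1                  # assumption: number of devices is not stated in the card

total_train_batch_size = train_batch_size * gradient_accumulation_steps * num_devices
print(total_train_batch_size)  # 32, matching total_train_batch_size above
```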
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1 |
SidMcStarter/legal-llama3-safetensors | SidMcStarter | 2025-06-19T17:01:59Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:SidMcStarter/legal-llama3-lora-model",
"base_model:quantized:SidMcStarter/legal-llama3-lora-model",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2025-06-19T17:00:20Z | ---
base_model: SidMcStarter/legal-llama3-lora-model
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** SidMcStarter
- **License:** apache-2.0
- **Finetuned from model :** SidMcStarter/legal-llama3-lora-model
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
acezxn/SOC_Activity_Generation_Base_Llama_3B | acezxn | 2025-06-19T17:00:43Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"unsloth",
"trl",
"sft",
"base_model:acezxn/ACI_Cyber_Base_Llama_3B",
"base_model:finetune:acezxn/ACI_Cyber_Base_Llama_3B",
"endpoints_compatible",
"region:us"
] | null | 2025-05-02T19:36:04Z | ---
base_model: acezxn/ACI_Cyber_Base_Llama_3B
library_name: transformers
model_name: SOC_Activity_Generation_Base_Llama_3B
tags:
- generated_from_trainer
- unsloth
- trl
- sft
licence: license
---
# Model Card for SOC_Activity_Generation_Base_Llama_3B
This model is a fine-tuned version of [acezxn/ACI_Cyber_Base_Llama_3B](https://huggingface.co/acezxn/ACI_Cyber_Base_Llama_3B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="acezxn/SOC_Activity_Generation_Base_Llama_3B", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.15.2
- Transformers: 4.50.0
- Pytorch: 2.6.0
- Datasets: 3.4.1
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
pot99rta/PatriSlush-DarkRPReign-12B | pot99rta | 2025-06-19T16:56:31Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"arxiv:2403.19522",
"base_model:Aleteian/Darkness-Reign-MN-12B",
"base_model:merge:Aleteian/Darkness-Reign-MN-12B",
"base_model:ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.2",
"base_model:merge:ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.2",
"base_model:DavidAU/MN-GRAND-Gutenberg-Lyra4-Lyra-12B-DARKNESS",
"base_model:merge:DavidAU/MN-GRAND-Gutenberg-Lyra4-Lyra-12B-DARKNESS",
"base_model:SuperbEmphasis/Omega-Darker_The-Final-Directive-Longform-Stage2-ERP-12B-v0.2",
"base_model:merge:SuperbEmphasis/Omega-Darker_The-Final-Directive-Longform-Stage2-ERP-12B-v0.2",
"base_model:mergekit-community/Irix-12B_Slush",
"base_model:merge:mergekit-community/Irix-12B_Slush",
"base_model:pot99rta/Patricide-12B-Forgottenslop-Mell",
"base_model:merge:pot99rta/Patricide-12B-Forgottenslop-Mell",
"base_model:yamatazen/LorablatedStock-12B",
"base_model:merge:yamatazen/LorablatedStock-12B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-19T16:28:18Z | ---
base_model:
- Aleteian/Darkness-Reign-MN-12B
- yamatazen/LorablatedStock-12B
- ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.2
- DavidAU/MN-GRAND-Gutenberg-Lyra4-Lyra-12B-DARKNESS
- SuperbEmphasis/Omega-Darker_The-Final-Directive-Longform-Stage2-ERP-12B-v0.2
- pot99rta/Patricide-12B-Forgottenslop-Mell
- mergekit-community/Irix-12B_Slush
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [pot99rta/Patricide-12B-Forgottenslop-Mell](https://huggingface.co/pot99rta/Patricide-12B-Forgottenslop-Mell) as a base.
### Models Merged
The following models were included in the merge:
* [Aleteian/Darkness-Reign-MN-12B](https://huggingface.co/Aleteian/Darkness-Reign-MN-12B)
* [yamatazen/LorablatedStock-12B](https://huggingface.co/yamatazen/LorablatedStock-12B)
* [ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.2](https://huggingface.co/ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.2)
* [DavidAU/MN-GRAND-Gutenberg-Lyra4-Lyra-12B-DARKNESS](https://huggingface.co/DavidAU/MN-GRAND-Gutenberg-Lyra4-Lyra-12B-DARKNESS)
* [SuperbEmphasis/Omega-Darker_The-Final-Directive-Longform-Stage2-ERP-12B-v0.2](https://huggingface.co/SuperbEmphasis/Omega-Darker_The-Final-Directive-Longform-Stage2-ERP-12B-v0.2)
* [mergekit-community/Irix-12B_Slush](https://huggingface.co/mergekit-community/Irix-12B_Slush)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: pot99rta/Patricide-12B-Forgottenslop-Mell
models:
- model: mergekit-community/Irix-12B_Slush
- model: yamatazen/LorablatedStock-12B
- model: ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.2
- model: SuperbEmphasis/Omega-Darker_The-Final-Directive-Longform-Stage2-ERP-12B-v0.2
- model: DavidAU/MN-GRAND-Gutenberg-Lyra4-Lyra-12B-DARKNESS
- model: Aleteian/Darkness-Reign-MN-12B
merge_method: model_stock
dtype: bfloat16
out_dtype: bfloat16
parameters:
normalize: true
tokenizer:
source: union
```
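A config like the one above is consumed by mergekit's `mergekit-yaml` CLI. A hedged sketch (the config here is truncated to a few illustrative keys, and the merge command itself is left commented out since it would download every listed 12B model):

```shell
# Write a truncated, illustrative copy of the merge config to disk.
cat > merge-config.yaml <<'EOF'
base_model: pot99rta/Patricide-12B-Forgottenslop-Mell
merge_method: model_stock
dtype: bfloat16
EOF

# With mergekit installed (pip install mergekit), the merge itself would be:
# mergekit-yaml merge-config.yaml ./merged-model
echo "wrote merge-config.yaml"
```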
|
SidMcStarter/legal-llama3-lora-model | SidMcStarter | 2025-06-19T16:55:19Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T16:54:22Z | ---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** SidMcStarter
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
momit3/q-FrozenLake-v1-4x4-noSlippery | momit3 | 2025-06-19T16:54:52Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2025-06-19T16:54:50Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
# `load_from_hub` is the helper defined in the Hugging Face Deep RL course notebook,
# and `gym` is Gymnasium (`import gymnasium as gym`).
model = load_from_hub(repo_id="momit3/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
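Once the Q-table is loaded, acting is just a greedy argmax over the current state's row. A minimal stdlib-only sketch with a toy stand-in table (the real `model["qtable"]` from this repo should have shape 16×4 for the 4x4 map, following the Deep RL course conventions):

```python
# Toy stand-in for model["qtable"]: 16 states x 4 actions (left, down, right, up)
qtable = [[0.0, 0.0, 0.0, 0.0] for _ in range(16)]
qtable[0][2] = 1.0  # pretend "right" has the highest value in the start state

state = 0
action = max(range(4), key=lambda a: qtable[state][a])  # greedy action for this state
print(action)  # 2
```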
|
BootesVoid/cmc3d1faf001fnx8d5x3l2v42_cmc3klurq00o3nx8d09rr3wcb | BootesVoid | 2025-06-19T16:53:56Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-06-19T16:53:54Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: ROSE1
---
# Cmc3D1Faf001Fnx8D5X3L2V42_Cmc3Klurq00O3Nx8D09Rr3Wcb
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `ROSE1` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "ROSE1",
"lora_weights": "https://huggingface.co/BootesVoid/cmc3d1faf001fnx8d5x3l2v42_cmc3klurq00o3nx8d09rr3wcb/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmc3d1faf001fnx8d5x3l2v42_cmc3klurq00o3nx8d09rr3wcb', weight_name='lora.safetensors')
image = pipeline('ROSE1').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmc3d1faf001fnx8d5x3l2v42_cmc3klurq00o3nx8d09rr3wcb/discussions) to add images that show off what you’ve made with this LoRA.
|
diffusion-reasoning/sudoku-wll-neg-pos-2500 | diffusion-reasoning | 2025-06-19T16:53:28Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:GSAI-ML/LLaDA-8B-Instruct",
"base_model:adapter:GSAI-ML/LLaDA-8B-Instruct",
"region:us"
] | null | 2025-06-19T16:53:16Z | ---
base_model: GSAI-ML/LLaDA-8B-Instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1 |
M-o-r-p-h-e-u-s/Qwen2.5-0.5B-Instruct-R1-Lobotomy | M-o-r-p-h-e-u-s | 2025-06-19T16:52:44Z | 0 | 0 | null | [
"safetensors",
"qwen2",
"en",
"dataset:OpenBuddy/R1-0528-Distill",
"dataset:cognitivecomputations/dolphin-r1",
"base_model:Qwen/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-0.5B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-06-17T17:53:07Z | ---
license: apache-2.0
datasets:
- OpenBuddy/R1-0528-Distill
- cognitivecomputations/dolphin-r1
language:
- en
base_model:
- Qwen/Qwen2.5-0.5B-Instruct
---
# Qwen 2.5 0.5B - R1 Lobotomy
This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct) on the [OpenBuddy/R1-0528-Distill](https://huggingface.co/datasets/OpenBuddy/R1-0528-Distill) (magicoder-oss-distilled-r1-0528.jsonl) and the [cognitivecomputations/dolphin-r1](https://huggingface.co/datasets/cognitivecomputations/dolphin-r1) (dolphin-r1-reasoning-deepseek.jsonl) datasets.
---
## Model description
This is my first attempt at finetuning a model.<br>
I started with the instruct version (non-reasoning) and my goal was to have it reason its way to an answer.<br>
The training was performed using LLaMA-Factory.<br>
<br>
The resulting model is weird, I've played a little with it and it can ramble on producing its own question/answer pairs often without "thinking", or even "thinking" out loud.<br>
<br>
Prompting it via the system message to properly reason and use the correct tags seems to fix it a bit.<br>
This is the system message I used:<br>
```
You are a helpful assistant that thinks before it answers.
You always begin your answer with `<think>`, followed by your reasoning, and once you end your thinking you mark it with `</think>`.
You then proceed to provide your final answer.
```
```
An example of this approach is:<br>
```
Produce a python function that calculates the fibonacci sequence.
```
```
<think>Okay, I need to create a Python function called fibonacci that returns an array of Fibonacci numbers starting from 0 up to n. Let's think about how to approach this.
First, understanding the problem. The Fibonacci sequence is a series where each number is the sum of the two preceding ones, usually starting with 0 and 1. So for example, the first few terms would be 0, 1, 1, 2, 3, etc.
The Fibonacci sequence starts with 0 and 1. So if we start generating the first two terms, they should be 0 and 1. Then the next term would be 0 + 1 = 1, then 1 + 0 = 1, and so on.
But wait, what about larger values? Like when we generate the 6th term or higher? Well, according to the definition, each subsequent term depends on the previous two. For example, the third term is 0 + 1 = 1, the fourth is 1 + 1 = 2, etc. So even though the sequence grows quickly, there are only a finite number of terms (like the nth term is determined by the (n-1)th and (n-2)th terms).
So the task is to write a function that generates the Fibonacci sequence iteratively until the desired index is reached. Because the recursion depth in Python can get really high, perhaps using a loop with memoization could help here. But maybe a generator is more efficient for large n?
Let me outline the steps:
1. Handle edge cases: If n is negative, return an empty list.
2. Start with initial conditions: n=0, fib[0] = 0, fib[1] = 1.
3. Generate the sequence iteratively:
   a. Compute the current term as fib[n].
   b. Create a new tuple containing both fib[n-1] and fib[n], but prepend the current term to the tuple to avoid duplicate entries.
4. Return the resulting list after processing all elements up to n.
Wait, let me check if that makes sense. The Fibonacci sequence is known to have O(n) time complexity because each step uses O(1) space, and generating the entire sequence takes linear time relative to n.
Now, implementing this logic. Using a helper function like fibonacci_iterative to compute the sequence efficiently. Also, handling the base case for n=0. Let's put it all together.</think>
To solve this problem, I'll create a function that computes the Fibonacci sequence iteratively and handles edge cases. The iterative version will work efficiently even for large n without recursion depth issues. Here's a possible implementation:
def fibonacci(n):
    """
    Returns the nth Fibonacci number iteratively.

    Args:
        n (int): The position in the Fibonacci sequence (must be non-negative)

    Returns:
        int: The nth Fibonacci number
    """
    if n < 0:
        raise ValueError("n must be non-negative")

    # Initialize the first two Fibonacci numbers
    fib_sequence = [0, 1]

    # Generate the sequence iteratively
    for i in range(2, n+1):
        next_term = fib_sequence[i-1] + fib_sequence[i-2]
        fib_sequence.append(next_term)

    return fib_sequence[-1]

# Example usage:
print(fibonacci(7))  # Output: 8
This solution uses a loop to iterate through the sequence while maintaining the last two computed terms. This allows us to efficiently generate the entire sequence in O(n) time, making it suitable for large n without performance issues. The initial conditions ensure that the sequence correctly starts at 0 and 1.
```
Keep in mind that it can also think without giving you an answer outside the "thinking" tags, or include it inside that section.
---
## Intended uses & limitations
Feel free to use this model however you want, while adhering to Qwen's original license.
---
## Training procedure
I trained this model on the 2 datasets provided above (take note that I only used the specified files, not the entire datasets). The training lasted for 3 epochs (arbitrarily set).<br>
<br>
Due to vRAM limitations I took 3 actions:
- Freeze the model and only train the final 5 layers.
- Limit the context to 4096 tokens.
- Filter the datasets to 12k maximum characters and 10k maximum characters respectively.
I reduced the learning rate to `5e-06` as I didn't want to completely obliterate it.<br>
<br>
The model seems to have learned, but its progress slowed down dramatically very early on.<br>
It began with a loss at about ~1.3 and ended at about ~0.9.<br>
Find in the `trainer_log.jsonl` the complete training log step by step.<br>
The training went on for a little over 2 days on my poor 3060 12gb.<br>
During the training, the model was fed about 1.1 trillion tokens.<br>
Finally, I have no idea how the 3 epochs at 4096 context length affected its ability to handle longer sequences.<br>
<br>
Loss progression across 3 epochs:

---
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 256
- total_train_batch_size: 256
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 20
- num_epochs: 3.0
<!-- end of the list -->
For the complete training configuration, please see `training_args.yaml` and/or `llamaboard_config.yaml`.
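The constant-with-warmup schedule listed above is easy to sketch: the learning rate ramps linearly over the first 20 optimizer steps and then stays flat. A simplified version of what `transformers`' `constant_with_warmup` scheduler computes (hypothetical helper, not from the card):

```python
def lr_at(step, base_lr=5e-06, warmup_steps=20):
    # Linear warmup to base_lr, then constant thereafter
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr

print(lr_at(10))   # 2.5e-06 (halfway through warmup)
print(lr_at(500))  # 5e-06 (constant after warmup)
```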
---
### Framework versions
- Transformers 4.52.4
- Pytorch 2.7.1+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1
---
Have fun with this scoundrel of a model and please do get in touch if you have anything you want to relay, fun chat examples, advice, or anything else!<br>
Cya! |
Evilafo/class-categorie | Evilafo | 2025-06-19T16:51:08Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-06-19T16:50:46Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mednikov/RL_Lesson_2 | mednikov | 2025-06-19T16:49:54Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2025-06-19T16:49:25Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 534.50 +/- 34.38
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib<br/>
SBX (SB3 + Jax): https://github.com/araffin/sbx
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```bash
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga mednikov -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), you can run these commands from anywhere:
```bash
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga mednikov -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga mednikov
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
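The exploration hyperparameters above define a linear epsilon-greedy schedule: epsilon is annealed from its initial value down to `exploration_final_eps` over the first `exploration_fraction` of the run, then held constant. A small sketch of that schedule (assuming SB3's default initial epsilon of 1.0):

```python
def epsilon_at(step, n_timesteps=1_000_000, exploration_fraction=0.1,
               exploration_final_eps=0.01, exploration_initial_eps=1.0):
    """Linearly anneal epsilon over the first exploration_fraction of
    training, then hold it at the final value (SB3's default behaviour)."""
    end_step = exploration_fraction * n_timesteps
    if step >= end_step:
        return exploration_final_eps
    frac = step / end_step
    return exploration_initial_eps + frac * (exploration_final_eps - exploration_initial_eps)

print(epsilon_at(0))        # 1.0 — fully random at the start
print(epsilon_at(50_000))   # 0.505 — halfway through the annealing window
print(epsilon_at(200_000))  # 0.01 — held constant after 10% of training
```

With these settings the agent acts greedily with probability 0.99 for the last 90% of the 1M-step run.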
## Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
andreagasparini/ModernBERT-base-multilingual-stress | andreagasparini | 2025-06-19T16:33:35Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"modernbert",
"text-classification",
"generated_from_trainer",
"base_model:clapAI/modernBERT-base-multilingual-sentiment",
"base_model:finetune:clapAI/modernBERT-base-multilingual-sentiment",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-06-19T16:32:59Z | ---
library_name: transformers
license: apache-2.0
base_model: clapAI/modernBERT-base-multilingual-sentiment
tags:
- generated_from_trainer
metrics:
- f1
- precision
- recall
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [clapAI/modernBERT-base-multilingual-sentiment](https://huggingface.co/clapAI/modernBERT-base-multilingual-sentiment) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1349
- F1: 0.8234
- F1 Macro: 0.8227
- F1 Micro: 0.8239
- Precision: 0.8250
- Precision Macro: 0.8257
- Precision Micro: 0.8239
- Recall: 0.8239
- Recall Macro: 0.8218
- Recall Micro: 0.8239
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: constant
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | F1 Macro | F1 Micro | Precision | Precision Macro | Precision Micro | Recall | Recall Macro | Recall Micro |
|:-------------:|:-----:|:----:|:---------------:|:------:|:--------:|:--------:|:---------:|:---------------:|:---------------:|:------:|:------------:|:------------:|
| No log | 1.0 | 160 | 0.3541 | 0.8522 | 0.8520 | 0.8521 | 0.8536 | 0.8523 | 0.8521 | 0.8521 | 0.8531 | 0.8521 |
| No log | 2.0 | 320 | 0.6458 | 0.8485 | 0.8480 | 0.8486 | 0.8486 | 0.8486 | 0.8486 | 0.8486 | 0.8477 | 0.8486 |
| No log | 3.0 | 480 | 1.1349 | 0.8234 | 0.8227 | 0.8239 | 0.8250 | 0.8257 | 0.8239 | 0.8239 | 0.8218 | 0.8239 |
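The macro and micro averages reported above differ in how per-class scores are combined: macro-F1 averages the per-label F1 values, while micro-F1 pools true/false positives across all labels before computing a single F1. A minimal stdlib sketch of the two averages (toy labels, not the actual evaluation data):

```python
from collections import Counter

def f1_scores(y_true, y_pred):
    """Micro- and macro-averaged F1 from label lists. Real runs use
    sklearn/evaluate; this just illustrates the averaging difference."""
    labels = sorted(set(y_true) | set(y_pred))
    tp, fp, fn = Counter(), Counter(), Counter()
    for t, p in zip(y_true, y_pred):
        if t == p:
            tp[t] += 1
        else:
            fp[p] += 1  # predicted label gets a false positive
            fn[t] += 1  # true label gets a false negative
    per_label_f1 = []
    for lab in labels:
        denom = 2 * tp[lab] + fp[lab] + fn[lab]
        per_label_f1.append(2 * tp[lab] / denom if denom else 0.0)
    macro = sum(per_label_f1) / len(labels)          # unweighted mean over labels
    total_tp, total_fp, total_fn = sum(tp.values()), sum(fp.values()), sum(fn.values())
    micro = 2 * total_tp / (2 * total_tp + total_fp + total_fn)  # pooled counts
    return micro, macro

micro, macro = f1_scores([0, 0, 1, 1, 1], [0, 1, 1, 1, 0])
print(round(micro, 4), round(macro, 4))  # 0.6 0.5833
```

When classes are balanced (as the near-identical micro/macro values in the table suggest), the two averages stay close; they diverge under class imbalance.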
### Framework versions
- Transformers 4.48.0
- Pytorch 2.5.0+cu124
- Datasets 2.16.1
- Tokenizers 0.21.1
|
momererkoc/qwen3-14b-lozan | momererkoc | 2025-06-19T16:27:46Z | 0 | 0 | null | [
"safetensors",
"history",
"diplomatic",
"qwen3",
"tr",
"base_model:unsloth/Qwen3-14B",
"base_model:finetune:unsloth/Qwen3-14B",
"license:apache-2.0",
"region:us"
] | null | 2025-06-19T15:40:40Z | ---
license: apache-2.0
language:
- tr
base_model:
- unsloth/Qwen3-14B
tags:
- history
- diplomatic
- qwen3
---
# Fine-tuned Qwen3-14B Model
## Model Description
This is a fine-tuned version of the Qwen3-14B model optimized with Unsloth and 4-bit quantization for efficient inference. The model was fine-tuned on a curated historical text corpus focused on early 20th-century diplomatic events and treaties, enabling nuanced understanding and generation of complex political and historical narratives.
## Key Features
- 2x faster training using Unsloth optimization
- 4-bit quantization for reduced memory footprint and efficient inference
- Enhanced understanding of historical and diplomatic discourse
- Compatible with Hugging Face’s TRL library
- Supports LoRA fine-tuning for flexible adaptation
## Model Details
- **Base Model:** unsloth/qwen3-14b-unsloth-bnb-4bit
- **License:** Apache 2.0
- **Fine-tuned by:** momererkoc
- **Languages:** Primarily Turkish, with English capabilities
- **Quantization:** 4-bit via BitsAndBytes
## Training Data
- Extensive historical text collection focusing on early 20th-century diplomacy and treaty analysis
- Texts rich in complex political context and multi-layered narrative
## Usage Example
```python
from unsloth import FastLanguageModel
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "momererkoc/qwen3-14b-lozan",  # the fine-tuned model repository
max_seq_length = 2048,
load_in_4bit = True,
token="hf_...",
)
messages = [
{"role" : "user", "content" : "Lozan anlaşmasının sonuçları nelerdir?"}
]
text = tokenizer.apply_chat_template(
messages,
tokenize = False,
add_generation_prompt = True, # Must add for generation
enable_thinking = False, # Disable thinking
)
from transformers import TextStreamer
_ = model.generate(
**tokenizer(text, return_tensors = "pt").to("cuda"),
max_new_tokens = 256, # Increase for longer outputs!
temperature = 0.7, top_p = 0.8, top_k = 20, # For non thinking
streamer = TextStreamer(tokenizer, skip_prompt = True),
)
```
## Performance
- 2x faster fine-tuning compared to standard methods
- Reduced GPU memory usage due to 4-bit quantization
- Strong contextual understanding of historical and political texts
- Supports generation of coherent, context-aware historical narratives
## Optimization Details
- Uses Unsloth for accelerated training
- 4-bit quantization via BitsAndBytes
- LoRA for parameter-efficient fine-tuning
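LoRA, listed above, freezes the pretrained weight matrix and trains only a low-rank correction `B @ A`, scaled by `alpha / rank`. A toy sketch of the resulting forward pass on plain Python lists (the 2-d shapes, `alpha`, and values are illustrative, not the model's real projection sizes or config):

```python
def matvec(m, v):
    """Dense matrix-vector product on nested lists (illustration only)."""
    return [sum(w * x for w, x in zip(row, v)) for row in m]

def lora_forward(x, W, A, B, alpha=16, rank=2):
    """h = W x + (alpha / rank) * B (A x): the frozen base projection W
    plus a trainable low-rank correction B @ A, as in LoRA fine-tuning."""
    base = matvec(W, x)                 # frozen pretrained weights
    low_rank = matvec(B, matvec(A, x))  # rank-r update, r << d
    scaling = alpha / rank
    return [b + scaling * l for b, l in zip(base, low_rank)]

# Toy 2-d example: identity base weight plus a small low-rank correction.
h = lora_forward([1.0, 2.0],
                 W=[[1.0, 0.0], [0.0, 1.0]],
                 A=[[1.0, 0.0], [0.0, 1.0]],
                 B=[[0.1, 0.0], [0.0, 0.1]],
                 alpha=2, rank=2)
print(h)  # base output plus the scaled low-rank term
```

Only `A` and `B` receive gradients during fine-tuning, which is what keeps the adapter's parameter count and memory footprint small.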
## Acknowledgments
- Unsloth team for optimization tools
- Hugging Face for the transformers ecosystem
- Dataset creators for curated historical texts
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) |
smerchi/question_classification_3 | smerchi | 2025-06-19T16:26:37Z | 0 | 0 | null | [
"pytorch",
"tensorboard",
"bert",
"generated_from_trainer",
"region:us"
] | null | 2025-06-19T16:10:39Z | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: question_classification_3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# question_classification_3
This model is a fine-tuned version of [SI2M-Lab/DarijaBERT](https://huggingface.co/SI2M-Lab/DarijaBERT) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0580
- Accuracy: 0.9915
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
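With `lr_scheduler_type: linear` and no warmup configured, the learning rate decays linearly from 5e-05 to zero over the full run (706 steps per epoch × 10 epochs = 7060 steps, per the results table). A quick sketch of that schedule:

```python
def linear_lr(step, total_steps, base_lr=5e-5):
    """Linear decay from base_lr to 0 across training, no warmup --
    the schedule implied by lr_scheduler_type: linear above."""
    return base_lr * max(0.0, 1.0 - step / total_steps)

total_steps = 706 * 10  # steps per epoch x num_epochs

print(linear_lr(0, total_steps))     # 5e-05 at the start
print(linear_lr(3530, total_steps))  # halved midway (end of epoch 5)
print(linear_lr(7060, total_steps))  # 0.0 at the final step
```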
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1843 | 1.0 | 706 | 0.1116 | 0.9780 |
| 0.0608 | 2.0 | 1412 | 0.1144 | 0.9851 |
| 0.0333 | 3.0 | 2118 | 0.0809 | 0.9851 |
| 0.0223 | 4.0 | 2824 | 0.0828 | 0.9887 |
| 0.0098 | 5.0 | 3530 | 0.0990 | 0.9880 |
| 0.0088 | 6.0 | 4236 | 0.0580 | 0.9915 |
| 0.0064 | 7.0 | 4942 | 0.0660 | 0.9908 |
| 0.0015 | 8.0 | 5648 | 0.1260 | 0.9851 |
| 0.0018 | 9.0 | 6354 | 0.1184 | 0.9873 |
| 0.0 | 10.0 | 7060 | 0.1183 | 0.9880 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.6.0+cu124
- Datasets 2.21.0
- Tokenizers 0.13.3
|
Zigra/Snow_009 | Zigra | 2025-06-19T16:24:15Z | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2025-06-16T06:43:50Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mchettih/financial_QA_finetuned_teacher_merged | mchettih | 2025-06-19T16:21:03Z | 0 | 0 | transformers | [
"transformers",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/Llama-3.2-3B-Instruct",
"base_model:finetune:unsloth/Llama-3.2-3B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T16:21:02Z | ---
base_model: unsloth/Llama-3.2-3B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** mchettih
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Llama-3.2-3B-Instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Zigra/Snow_004 | Zigra | 2025-06-19T16:12:03Z | 4 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2025-06-15T15:19:53Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
DAMO-NLP-SG/VideoRefer-VideoLLaMA3-2B | DAMO-NLP-SG | 2025-06-19T16:09:52Z | 19 | 2 | transformers | [
"transformers",
"safetensors",
"videollama3_qwen2",
"text-generation",
"multimodal large language model",
"large video-language model",
"video-text-to-text",
"en",
"arxiv:2406.07476",
"arxiv:2501.00599",
"arxiv:2504.16072",
"arxiv:2501.13106",
"base_model:DAMO-NLP-SG/VideoLLaMA3-2B-Image",
"base_model:finetune:DAMO-NLP-SG/VideoLLaMA3-2B-Image",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | video-text-to-text | 2025-06-17T14:33:56Z | ---
license: apache-2.0
language:
- en
metrics:
- accuracy
library_name: transformers
pipeline_tag: video-text-to-text
tags:
- multimodal large language model
- large video-language model
base_model:
- DAMO-NLP-SG/VideoLLaMA3-2B-Image
---
<p align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/64a3fe3dde901eb01df12398/ZrZPYT0Q3wgza7Vc5BmyD.png" width="100%" style="margin-bottom: 0.2;"/>
</p>
<h3 align="center"><a href="https://arxiv.org/abs/2406.07476" style="color:#4D2B24">
VideoRefer Suite: Advancing Spatial-Temporal Object Understanding with Video LLM</a></h3>
<h5 align="center"> If you like our project, please give us a star ⭐ on <a href="https://github.com/DAMO-NLP-SG/VideoRefer">Github</a> for the latest update. </h5>
<div style="display: flex; justify-content: center; margin-top: 10px;">
<a href="https://arxiv.org/pdf/2501.00599"><img src="https://img.shields.io/badge/Arxiv-2501.00599-ECA8A7" style="margin-right: 5px;"></a>
<a href="https://huggingface.co/spaces/lixin4ever/VideoRefer-VideoLLaMA3"><img src='https://img.shields.io/badge/HuggingFace-Demo-96D03A' style="margin-right: 5px;"></a>
<a href="https://github.com/DAMO-NLP-SG/VideoRefer"><img src='https://img.shields.io/badge/Github-VideoRefer-F7C97E' style="margin-right: 5px;"></a>
<a href="https://github.com/DAMO-NLP-SG/VideoLLaMA3"><img src='https://img.shields.io/badge/Github-VideoLLaMA3-9DC3E6' style="margin-right: 5px;"></a>
</div>
## 📰 News
* **[2025.6.19]** 🔥We release the [demo](https://huggingface.co/spaces/lixin4ever/VideoRefer-VideoLLaMA3) of VideoRefer-VideoLLaMA3, hosted on HuggingFace. Feel free to try it!
* **[2025.6.18]** 🔥We release a new version of VideoRefer([VideoRefer-VideoLLaMA3-7B](https://huggingface.co/DAMO-NLP-SG/VideoRefer-VideoLLaMA3-7B) and [VideoRefer-VideoLLaMA3-2B](https://huggingface.co/DAMO-NLP-SG/VideoRefer-VideoLLaMA3-2B)), which are trained based on [VideoLLaMA3](https://github.com/DAMO-NLP-SG/VideoLLaMA3).
* **[2025.4.22]** 🔥Our VideoRefer-Bench has been adopted in [Describe Anything Model](https://arxiv.org/pdf/2504.16072) (NVIDIA & UC Berkeley).
* **[2025.2.27]** 🔥VideoRefer Suite has been accepted to CVPR2025!
* **[2025.2.18]** 🔥We release the [VideoRefer-700K dataset](https://huggingface.co/datasets/DAMO-NLP-SG/VideoRefer-700K) on HuggingFace.
* **[2025.1.1]** 🔥We release [VideoRefer-7B](https://huggingface.co/DAMO-NLP-SG/VideoRefer-7B), the code of VideoRefer and the [VideoRefer-Bench](https://huggingface.co/datasets/DAMO-NLP-SG/VideoRefer-Bench).
## 🌏 Model Zoo
| Model Name | Visual Encoder | Language Decoder |
|:----------------|:----------------|:------------------|
| [VideoRefer-VideoLLaMA3-7B](https://huggingface.co/DAMO-NLP-SG/VideoRefer-VideoLLaMA3-7B) | [VL3-SigLIP-NaViT](https://huggingface.co/DAMO-NLP-SG/VL3-SigLIP-NaViT) | [Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) |
| [VideoRefer-VideoLLaMA3-2B (This Checkpoint)](https://huggingface.co/DAMO-NLP-SG/VideoRefer-VideoLLaMA3-2B) | [VL3-SigLIP-NaViT](https://huggingface.co/DAMO-NLP-SG/VL3-SigLIP-NaViT) | [Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) |
| [VideoRefer-7B](https://huggingface.co/DAMO-NLP-SG/VideoRefer-7B) | [siglip-so400m-patch14-384](https://huggingface.co/google/siglip-so400m-patch14-384) | [Qwen2-7B-Instruct](https://huggingface.co/Qwen/Qwen2-7B-Instruct) |
| [VideoRefer-7B-stage2](https://huggingface.co/DAMO-NLP-SG/VideoRefer-7B-stage2) | [siglip-so400m-patch14-384](https://huggingface.co/google/siglip-so400m-patch14-384) | [Qwen2-7B-Instruct](https://huggingface.co/Qwen/Qwen2-7B-Instruct) |
| [VideoRefer-7B-stage2.5](https://huggingface.co/DAMO-NLP-SG/VideoRefer-7B-stage2.5) | [siglip-so400m-patch14-384](https://huggingface.co/google/siglip-so400m-patch14-384) | [Qwen2-7B-Instruct](https://huggingface.co/Qwen/Qwen2-7B-Instruct) |
## 📑 Citation
If you find VideoRefer Suite useful for your research and applications, please cite using this BibTeX:
```bibtex
@InProceedings{Yuan_2025_CVPR,
author = {Yuan, Yuqian and Zhang, Hang and Li, Wentong and Cheng, Zesen and Zhang, Boqiang and Li, Long and Li, Xin and Zhao, Deli and Zhang, Wenqiao and Zhuang, Yueting and Zhu, Jianke and Bing, Lidong},
title = {VideoRefer Suite: Advancing Spatial-Temporal Object Understanding with Video LLM},
booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
month = {June},
year = {2025},
pages = {18970-18980}
}
@article{damonlpsg2025videollama3,
title={VideoLLaMA 3: Frontier Multimodal Foundation Models for Image and Video Understanding},
author={Boqiang Zhang, Kehan Li, Zesen Cheng, Zhiqiang Hu, Yuqian Yuan, Guanzheng Chen, Sicong Leng, Yuming Jiang, Hang Zhang, Xin Li, Peng Jin, Wenqi Zhang, Fan Wang, Lidong Bing, Deli Zhao},
journal={arXiv preprint arXiv:2501.13106},
year={2025},
url = {https://arxiv.org/abs/2501.13106}
}
``` |
ilybawkugo/lora_llama_1e5_4_8_1024 | ilybawkugo | 2025-06-19T16:07:36Z | 0 | 0 | transformers | [
"transformers",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T16:07:35Z | ---
base_model: unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** ilybawkugo
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
morturr/Llama-2-7b-hf-PAIR_amazon_dadjokes-COMB-amazon-comb-1-seed-7-2025-06-19 | morturr | 2025-06-19T16:02:50Z | 0 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-19T16:02:39Z | ---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-PAIR_amazon_dadjokes-COMB-amazon-comb-1-seed-7-2025-06-19
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-PAIR_amazon_dadjokes-COMB-amazon-comb-1-seed-7-2025-06-19
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 7
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
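For context, the total train batch size reported above is the per-device batch size multiplied by the gradient accumulation steps (assuming a single training device):

```python
# Effective (total) train batch size, assuming a single training device:
# per-device batch size × gradient accumulation steps
train_batch_size = 8
gradient_accumulation_steps = 4
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # → 32
```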
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1 |
Samas21/S3r135 | Samas21 | 2025-06-19T16:00:58Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-19T17:25:20Z | ---
license: apache-2.0
---
|
Khruna/jt | Khruna | 2025-06-19T16:00:01Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] | text-to-image | 2025-06-19T15:59:35Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: >-
images/Professional_Mode_woman_shows_her_shiny_plate.00_00_29_20.Still003.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
---
# jt
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/Khruna/jt/tree/main) them in the Files & versions tab.
|
morturr/Llama-2-7b-hf-LOO_one_liners-COMB_headlines-comb3-seed42-2025-06-19 | morturr | 2025-06-19T15:58:45Z | 0 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-19T14:34:44Z | ---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-LOO_one_liners-COMB_headlines-comb3-seed42-2025-06-19
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-LOO_one_liners-COMB_headlines-comb3-seed42-2025-06-19
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1 |
mlx-community/Dolphin-Mistral-24B-Venice-Edition-mlx-8Bit | mlx-community | 2025-06-19T15:54:14Z | 0 | 1 | mlx | [
"mlx",
"safetensors",
"mistral",
"uncensored",
"dolphin",
"venice",
"base_model:cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition",
"base_model:quantized:cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition",
"license:apache-2.0",
"8-bit",
"region:us"
] | null | 2025-06-19T15:12:04Z | ---
license: apache-2.0
base_model: cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition
tags:
- mlx
- uncensored
- dolphin
- venice
---
# mlx-community/Dolphin-Mistral-24B-Venice-Edition-mlx-8Bit
MLX 8bit quant of [cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition](https://huggingface.co/cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition) released **2025-06-12**.
*"This is an updated version based on feedback we received on v1"*, see [discussion](https://huggingface.co/cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition/discussions/1) at original repo.
The Model [mlx-community/Dolphin-Mistral-24B-Venice-Edition-mlx-8Bit](https://huggingface.co/mlx-community/Dolphin-Mistral-24B-Venice-Edition-mlx-8Bit) was converted to MLX format from [cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition](https://huggingface.co/cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition) using mlx-lm version **0.22.3**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/Dolphin-Mistral-24B-Venice-Edition-mlx-8Bit")
prompt="hello"
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
Khruna/hs | Khruna | 2025-06-19T15:53:55Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] | text-to-image | 2025-06-19T15:53:34Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: >-
images/Professional_Mode_woman_shows_her_shiny_plate.00_00_29_20.Still002.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
---
# hs
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/Khruna/hs/tree/main) them in the Files & versions tab.
|
QuantTrio/DeepSeek-R1-0528-GPTQ-Int4-Int8Mix-Compact | QuantTrio | 2025-06-19T15:53:38Z | 1,529 | 1 | transformers | [
"transformers",
"safetensors",
"deepseek_v3",
"text-generation",
"DeepSeek-R1-0528",
"GPTQ",
"Int4-Int8Mix",
"量化修复",
"vLLM",
"conversational",
"custom_code",
"arxiv:2501.12948",
"base_model:deepseek-ai/DeepSeek-R1-0528",
"base_model:quantized:deepseek-ai/DeepSeek-R1-0528",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"gptq",
"region:us"
] | text-generation | 2025-06-01T08:46:38Z | ---
library_name: transformers
license: mit
pipeline_tag: text-generation
tags:
- DeepSeek-R1-0528
- GPTQ
- Int4-Int8Mix
- 量化修复
- vLLM
base_model:
- deepseek-ai/DeepSeek-R1-0528
base_model_relation: quantized
---
# DeepSeek-R1-0528-GPTQ-Int4-Int8Mix-Compact
Base model: [deepseek-ai/DeepSeek-R1-0528](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528)
This repository delivers an Int4 + selectively-Int8 GPTQ `DeepSeek-R1-0528` model: only layers that are highly sensitive to quantization remain in Int8, while the rest stay Int4—preserving generation quality with minimal file-size overhead.
Preliminary trials show that converting the entire model to pure Int4 (AWQ/GPTQ) under the quantization layout used in vLLM’s current DeepSeek-R1 implementation degrades inference accuracy and can produce faulty outputs. Layer-wise fine-grained quantization substantially mitigates this issue.
Temporary patch:
vLLM == 0.9.0 does not yet natively support per-layer quantization for MoE modules.
We added get_moe_quant_method to gptq_marlin.py as an interim fix.
Until the upstream PR is merged, please replace the original file with the one provided in this repo.
### Variant Overview
| Variant | Characteristics | File Size | Recommended Scenario |
|-------------|---------------------------------------------------------------------------------------|-----------|--------------------------------------------------------------------------------------------|
| **Lite** | Only the most critical layers upgraded to Int8; size close to pure Int4 | 355 GB | Resource-constrained, lightweight server deployments |
| **Compact** | More Int8 layers, relatively higher output quality | 414 GB | VRAM-sufficient deployments focused on answer quality (e.g., 8 × A100) |
| **Medium** | Compact plus fully-Int8 attention layers; high quality with reduced long-context loss | 445 GB | VRAM-rich deployments needing both top answer quality and high concurrency (e.g., 8 × H20) |
Choose the variant that best matches your hardware and quality requirements.
### 【Model Update Date】
```
2025-05-31
1. fast commit
```
### 【Dependencies】
```
vllm==0.9.0
transformers==4.52.3
```
<div style="
background: rgba(255, 193, 61, 0.15);
padding: 16px;
border-radius: 6px;
border: 1px solid rgba(255, 165, 0, 0.3);
margin: 16px 0;
">
### 【💡Notes on New VLLM Versions💡】
#### 1. Recommend Using V0 Inference Mode
Before launching vLLM, set the environment variable
```
export VLLM_USE_V1=0
```
</div>
<div style="
background: rgba(255, 0, 200, 0.15);
padding: 16px;
border-radius: 6px;
border: 1px solid rgba(255, 0, 200, 0.3);
margin: 16px 0;
">
### 【💡 Patch for gptq_marlin.py💡】
At present, vllm==0.9.0 lacks support for per-layer quantization configurations for the moe module, which will lead to errors when loading the model.
I have implemented a simple fix by adding the get_moe_quant_method function to the gptq_marlin.py file.
Until the PR is merged, please replace the gptq_marlin.py file in your installation with the attached version, placing it at:
```
.../site-packages/vllm/model_executor/layers/quantization/gptq_marlin.py
```
</div>
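As a rough sketch (assuming a standard pip install of vLLM; virtualenv and conda layouts may place site-packages elsewhere), the destination path for the patched file can be derived from the interpreter's site-packages directory:

```python
import os
import sysconfig

# Build the expected destination path for the patched gptq_marlin.py.
# This only computes the path; copy the file from this repo over it manually.
site_packages = sysconfig.get_paths()["purelib"]
patch_target = os.path.join(
    site_packages,
    "vllm", "model_executor", "layers", "quantization", "gptq_marlin.py",
)
print(patch_target)
```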
### 【Model List】
| FILE SIZE | LATEST UPDATE TIME |
|---------|--------------|
| `414GB` | `2025-06-01` |
### 【Model Download】
```python
from huggingface_hub import snapshot_download
snapshot_download('QuantTrio/DeepSeek-R1-0528-GPTQ-Int4-Int8Mix-Compact', cache_dir="local_path")
```
## DeepSeek-R1-0528
<!-- markdownlint-disable first-line-h1 -->
<!-- markdownlint-disable html -->
<!-- markdownlint-disable no-duplicate-header -->
<div align="center">
<img src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/logo.svg?raw=true" width="60%" alt="DeepSeek-V3" />
</div>
<hr>
<div align="center" style="line-height: 1;">
<a href="https://www.deepseek.com/" target="_blank" style="margin: 2px;">
<img alt="Homepage" src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/badge.svg?raw=true" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://chat.deepseek.com/" target="_blank" style="margin: 2px;">
<img alt="Chat" src="https://img.shields.io/badge/🤖%20Chat-DeepSeek%20R1-536af5?color=536af5&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://huggingface.co/deepseek-ai" target="_blank" style="margin: 2px;">
<img alt="Hugging Face" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-DeepSeek%20AI-ffc107?color=ffc107&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<div align="center" style="line-height: 1;">
<a href="https://discord.gg/Tc7c45Zzu5" target="_blank" style="margin: 2px;">
<img alt="Discord" src="https://img.shields.io/badge/Discord-DeepSeek%20AI-7289da?logo=discord&logoColor=white&color=7289da" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/qr.jpeg?raw=true" target="_blank" style="margin: 2px;">
<img alt="Wechat" src="https://img.shields.io/badge/WeChat-DeepSeek%20AI-brightgreen?logo=wechat&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://twitter.com/deepseek_ai" target="_blank" style="margin: 2px;">
<img alt="Twitter Follow" src="https://img.shields.io/badge/Twitter-deepseek_ai-white?logo=x&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<div align="center" style="line-height: 1;">
<a href="LICENSE" style="margin: 2px;">
<img alt="License" src="https://img.shields.io/badge/License-MIT-f5de53?&color=f5de53" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<p align="center">
<a href="https://arxiv.org/pdf/2501.12948"><b>Paper Link</b>👁️</a>
</p>
## 1. Introduction
The DeepSeek R1 model has undergone a minor version upgrade, with the current version being DeepSeek-R1-0528. In the latest update, DeepSeek R1 has significantly improved its depth of reasoning and inference capabilities by leveraging increased computational resources and introducing algorithmic optimization mechanisms during post-training. The model has demonstrated outstanding performance across various benchmark evaluations, including mathematics, programming, and general logic. Its overall performance is now approaching that of leading models, such as O3 and Gemini 2.5 Pro.
<p align="center">
<img width="80%" src="figures/benchmark.png">
</p>
Compared to the previous version, the upgraded model shows significant improvements in handling complex reasoning tasks. For instance, in the AIME 2025 test, the model’s accuracy has increased from 70% in the previous version to 87.5% in the current version. This advancement stems from enhanced thinking depth during the reasoning process: in the AIME test set, the previous model used an average of 12K tokens per question, whereas the new version averages 23K tokens per question.
Beyond its improved reasoning capabilities, this version also offers a reduced hallucination rate, enhanced support for function calling, and better experience for vibe coding.
## 2. Evaluation Results
### DeepSeek-R1-0528
For all our models, the maximum generation length is set to 64K tokens. For benchmarks requiring sampling, we use a temperature of $0.6$, a top-p value of $0.95$, and generate 16 responses per query to estimate pass@1.
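As a minimal sketch of this protocol, pass@1 is estimated as the fraction of the 16 sampled responses that are judged correct (the flags below are illustrative, not real benchmark data):

```python
def estimate_pass_at_1(correct_flags):
    """pass@1 estimate: mean correctness over the sampled responses."""
    return sum(correct_flags) / len(correct_flags)

# 16 sampled responses for one query; 1 = correct, 0 = incorrect (made-up values)
flags = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1]
print(estimate_pass_at_1(flags))  # → 0.8125
```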
<div align="center">
| Category | Benchmark (Metric) | DeepSeek R1 | DeepSeek R1 0528 |
|----------|------------------------------------|-------------|------------------|
| General  | MMLU-Redux (EM)                    | 92.9        | 93.4             |
|          | MMLU-Pro (EM)                      | 84.0        | 85.0             |
|          | GPQA-Diamond (Pass@1)              | 71.5        | 81.0             |
|          | SimpleQA (Correct)                 | 30.1        | 27.8             |
|          | FRAMES (Acc.)                      | 82.5        | 83.0             |
|          | Humanity's Last Exam (Pass@1)      | 8.5         | 17.7             |
| Code     | LiveCodeBench (2408-2505) (Pass@1) | 63.5        | 73.3             |
|          | Codeforces-Div1 (Rating)           | 1530        | 1930             |
|          | SWE Verified (Resolved)            | 49.2        | 57.6             |
|          | Aider-Polyglot (Acc.)              | 53.3        | 71.6             |
| Math     | AIME 2024 (Pass@1)                 | 79.8        | 91.4             |
|          | AIME 2025 (Pass@1)                 | 70.0        | 87.5             |
|          | HMMT 2025 (Pass@1)                 | 41.7        | 79.4             |
|          | CNMO 2024 (Pass@1)                 | 78.8        | 86.9             |
| Tools    | BFCL_v3_MultiTurn (Acc)            | -           | 37.0             |
|          | Tau-Bench (Pass@1)                 | -           | 53.5 (Airline) / 63.9 (Retail) |
</div>
Note: We use the Agentless framework to evaluate model performance on SWE-Verified. We only evaluate text-only prompts in the HLE test set. GPT-4.1 is employed to act as the user role in the Tau-Bench evaluation.
## 5. License
This code repository is licensed under [MIT License](LICENSE). The use of DeepSeek-R1 models is also subject to [MIT License](LICENSE). DeepSeek-R1 series (including Base and Chat) supports commercial use and distillation.
## 6. Citation
```
@misc{deepseekai2025deepseekr1incentivizingreasoningcapability,
title={DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning},
author={DeepSeek-AI},
year={2025},
eprint={2501.12948},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2501.12948},
}
```
## 7. Contact
If you have any questions, please raise an issue or contact us at [[email protected]]([email protected]).
|
denizaybey/Cosmos-Reason1-7B | denizaybey | 2025-06-19T15:50:22Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"nvidia",
"cosmos",
"conversational",
"en",
"dataset:nvidia/Cosmos-Reason1-SFT-Dataset",
"dataset:nvidia/Cosmos-Reason1-RL-Dataset",
"dataset:nvidia/Cosmos-Reason1-Benchmark",
"arxiv:2503.15558",
"base_model:Qwen/Qwen2.5-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-7B-Instruct",
"license:other",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-06-19T15:50:22Z | ---
license: other
license_name: nvidia-open-model-license
license_link: >-
https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license
datasets:
- nvidia/Cosmos-Reason1-SFT-Dataset
- nvidia/Cosmos-Reason1-RL-Dataset
- nvidia/Cosmos-Reason1-Benchmark
library_name: transformers
language:
- en
base_model:
- Qwen/Qwen2.5-VL-7B-Instruct
tags:
- nvidia
- cosmos
---
# **Cosmos-Reason1: Physical AI Common Sense and Embodied Reasoning Models**
[**Cosmos**](https://huggingface.co/collections/nvidia/cosmos-reason1-67c9e926206426008f1da1b7) | [**Code**](https://github.com/nvidia-cosmos/cosmos-reason1) | [**Paper**](https://arxiv.org/abs/2503.15558) | [**Paper Website**](https://research.nvidia.com/labs/dir/cosmos-reason1)
# Model Overview
## Description:
**Cosmos-Reason1 Models**: Physical AI models understand physical common sense and generate appropriate embodied decisions in natural language through long chain-of-thought reasoning processes.
The Cosmos-Reason1 models are post-trained with physical common sense and embodied reasoning data with supervised fine-tuning and reinforcement learning. These are Physical AI models that can understand space, time, and fundamental physics, and can serve as planning models to reason about the next steps of an embodied agent.
The models are ready for commercial use.
**Model Developer**: NVIDIA
## Model Versions
The Cosmos-Reason1 includes the following model:
- [Cosmos-Reason1-7B](https://huggingface.co/nvidia/Cosmos-Reason1-7B): Given a text prompt and an input video, think and generate the answer with respect to the input text prompt and video.
### License:
This model is released under the [NVIDIA Open Model License](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license). For a custom license, please contact [[email protected]](mailto:[email protected]).
Under the NVIDIA Open Model License, NVIDIA confirms:
* Models are commercially usable.
* You are free to create and distribute Derivative Models.
* NVIDIA does not claim ownership to any outputs generated using the Models or Derivative Models.
**Important Note**: If You bypass, disable, reduce the efficacy of, or circumvent any technical limitation, safety guardrail or associated safety guardrail hyperparameter, encryption, security, digital rights management, or authentication mechanism (collectively “Guardrail”) contained in the Model without a substantially similar Guardrail appropriate for your use case, your rights under this Agreement [NVIDIA Open Model License Agreement](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license) will automatically terminate.
### Deployment Geography:
Global
### Use Case:
Physical AI: Space, time, fundamental physics understanding and embodied reasoning, encompassing robotics, and autonomous vehicles (AV).
### Release Date:
* Github: [05/17/2025](https://github.com/nvidia-cosmos/cosmos-reason1)
* Huggingface:
* [06/10/2025](https://huggingface.co/nvidia/Cosmos-Reason1-7B/commit/2464fff43c5c0bfb1916ac8c009feda4aed81be9). Enhanced critic capability for physical plausibility.
* [05/17/2025](https://huggingface.co/nvidia/Cosmos-Reason1-7B/commit/098a5bb62a1f4fc05e5c4ac89aae8005e301aa18). Initial release.
## Model Architecture:
Architecture Type: A Multi-modal LLM consists of a Vision Transformer (ViT) for vision encoder and a Dense Transformer model for LLM.
Network Architecture: Qwen2.5-VL-7B-Instruct.
Cosmos-Reason1-7B is post-trained based on [Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct) and follows the same model architecture.
## Input
**Input Type(s)**: Text+Video/Image
**Input Format(s)**:
* Text: String
* Video: mp4
* Image: jpg
**Input Parameters**:
* Text: One-dimensional (1D)
* Video: Three-dimensional (3D)
* Image: Two-dimensional (2D)
**Other Properties Related to Input**:
* Use `FPS=4` for input video to match the training setup.
* Append `Answer the question in the following format: <think>\nyour reasoning\n</think>\n\n<answer>\nyour answer\n</answer>.` in the system prompt to encourage long chain-of-thought reasoning response.
## Output
**Output Type(s)**: Text
**Output Format**: String
**Output Parameters**: Text: One-dimensional (1D)
**Other Properties Related to Output**:
* Recommend using 4096 or more output max tokens to avoid truncation of long chain-of-thought response.
* Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA’s hardware (e.g. GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times compared to CPU-only solutions. <br>
## Software Integration
**Runtime Engine(s):**
* [vLLM](https://github.com/vllm-project/vllm)
**Supported Hardware Microarchitecture Compatibility:**
* NVIDIA Blackwell
* NVIDIA Hopper
**Note**: We have only tested inference with BF16 precision.
**Operating System(s):**
* Linux (We have not tested on other operating systems.)
# Usage
See [Cosmos-Reason1](https://github.com/nvidia-cosmos/cosmos-reason1) for details.
* Post Training: [Cosmos-Reason1](https://github.com/nvidia-cosmos/cosmos-reason1) provides examples of supervised fine-tuning and reinforcement learning on embodied reasoning datasets.
# Evaluation
Please see our [technical paper](https://arxiv.org/pdf/2503.15558) for detailed evaluations on physical common sense and embodied reasoning. Part of the evaluation datasets are released under [Cosmos-Reason1-Benchmark](https://huggingface.co/datasets/nvidia/Cosmos-Reason1-Benchmark). The embodied reasoning datasets and benchmarks focus on the following areas: robotics (RoboVQA, BridgeDataV2, Agibot, RobFail), ego-centric human demonstration (HoloAssist), and Autonomous Vehicle (AV) driving video data. The AV dataset is collected and annotated by NVIDIA.
All datasets go through the data annotation process described in the technical paper to prepare training and evaluation data and annotations.
**Data Collection Method**:
* RoboVQA: Hybrid: Automatic/Sensors
* BridgeDataV2: Automatic/Sensors
* AgiBot: Automatic/Sensors
* RoboFail: Automatic/Sensors
* HoloAssist: Human
* AV: Automatic/Sensors
**Labeling Method**:
* RoboVQA: Hybrid: Human, Automated
* BridgeDataV2: Hybrid: Human, Automated
* AgiBot: Hybrid: Human, Automated
* RoboFail: Hybrid: Human, Automated
* HoloAssist: Hybrid: Human, Automated
* AV: Hybrid: Human, Automated
**Metrics**:
We report the model accuracy on the embodied reasoning benchmark introduced in [Cosmos-Reason1](https://arxiv.org/abs/2503.15558). The results differ from those presented in Table 9 due to additional training aimed at supporting a broader range of Physical AI tasks beyond the benchmark.
| | [RoboVQA](https://robovqa.github.io/) | AV | [BridgeDataV2](https://rail-berkeley.github.io/bridgedata/)| [Agibot](https://github.com/OpenDriveLab/AgiBot-World)| [HoloAssist](https://holoassist.github.io/) | [RoboFail](https://robot-reflect.github.io/) | Average |
|--------------------|---------------------------------------------|----------|------------------------------------------------------|------------------------------------------------|------------------------------------------------|------------------------------------------------|------------------------------------------------|
| **Accuracy** | 87.3 | 70.8 | 63.7 | 48.9 | 62.7 | 57.2 | 65.1 |
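The Average column is the unweighted mean of the six benchmark accuracies, which can be verified directly:

```python
# Unweighted mean of the per-benchmark accuracies from the table above
accuracies = {
    "RoboVQA": 87.3,
    "AV": 70.8,
    "BridgeDataV2": 63.7,
    "Agibot": 48.9,
    "HoloAssist": 62.7,
    "RoboFail": 57.2,
}
average = round(sum(accuracies.values()) / len(accuracies), 1)
print(average)  # → 65.1
```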
## Dataset Format
Modality: Video (mp4) and Text
## Dataset Quantification
We release the embodied reasoning data and benchmarks. Each data sample is a pair of video and text. The text annotations include understanding and reasoning annotations described in the Cosmos-Reason1 paper. Each video may have multiple text annotations. The quantity of the video and text pairs is described in the table below.
**The AV data is currently unavailable and will be uploaded soon!**
| | [RoboVQA](https://robovqa.github.io/) | AV | [BridgeDataV2](https://rail-berkeley.github.io/bridgedata/)| [Agibot](https://github.com/OpenDriveLab/AgiBot-World)| [HoloAssist](https://holoassist.github.io/) | [RoboFail](https://robot-reflect.github.io/) | Total Storage Size |
|--------------------|---------------------------------------------|----------|------------------------------------------------------|------------------------------------------------|------------------------------------------------|------------------------------------------------|--------------------|
| **SFT Data** | 1.14m | 24.7k | 258k | 38.9k | 273k | N/A | **300.6GB** |
| **RL Data** | 252 | 200 | 240 | 200 | 200 | N/A | **2.6GB** |
| **Benchmark Data** | 110 | 100 | 100 | 100 | 100 | 100 | **1.5GB** |
We release text annotations for all embodied reasoning datasets and videos for RoboVQA and AV datasets. For other datasets, users may download the source videos from the original data source and find corresponding video sources via the video names. The held-out RoboFail benchmark is released for measuring the generalization capability.
## Inference:
**Test Hardware:** H100, A100, GB200 <br>
> [!NOTE]
> We suggest using `fps=4` for the input video and `max_tokens=4096` to avoid truncated response.
```python
from transformers import AutoProcessor
from vllm import LLM, SamplingParams
from qwen_vl_utils import process_vision_info
# You can also replace the MODEL_PATH by a safetensors folder path mentioned above
MODEL_PATH = "nvidia/Cosmos-Reason1-7B"
llm = LLM(
model=MODEL_PATH,
limit_mm_per_prompt={"image": 10, "video": 10},
)
sampling_params = SamplingParams(
temperature=0.6,
top_p=0.95,
repetition_penalty=1.05,
max_tokens=4096,
)
video_messages = [
{"role": "system", "content": "You are a helpful assistant. Answer the question in the following format: <think>\nyour reasoning\n</think>\n\n<answer>\nyour answer\n</answer>."},
{"role": "user", "content": [
{"type": "text", "text": (
"Is it safe to turn right?"
)
},
{
"type": "video",
"video": "file:///path/to/your/video.mp4",
"fps": 4,
}
]
},
]
# Here we use video messages as a demonstration
messages = video_messages
processor = AutoProcessor.from_pretrained(MODEL_PATH)
prompt = processor.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
)
image_inputs, video_inputs, video_kwargs = process_vision_info(messages, return_video_kwargs=True)
mm_data = {}
if image_inputs is not None:
mm_data["image"] = image_inputs
if video_inputs is not None:
mm_data["video"] = video_inputs
llm_inputs = {
"prompt": prompt,
"multi_modal_data": mm_data,
# FPS will be returned in video_kwargs
"mm_processor_kwargs": video_kwargs,
}
outputs = llm.generate([llm_inputs], sampling_params=sampling_params)
generated_text = outputs[0].outputs[0].text
print(generated_text)
```
## Ethical Considerations
NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.
Users are responsible for model inputs and outputs. Users are responsible for ensuring safe integration of this model, including implementing guardrails as well as other safety mechanisms, prior to deployment.
For more detailed information on ethical considerations for this model, please see the subcards of Explainability, Bias, Safety & Security, and Privacy below.
Please report security vulnerabilities or NVIDIA AI Concerns [here](https://www.nvidia.com/en-us/support/submit-security-vulnerability/).
### Plus Plus (++) Promise
We value you, the datasets, the diversity they represent, and what we have been entrusted with. This model and its associated data have been:
* Verified to comply with current applicable disclosure laws, regulations, and industry standards.
* Verified to comply with applicable privacy labeling requirements.
* Annotated to describe the collector/source (NVIDIA or a third-party).
* Characterized for technical limitations.
* Reviewed to ensure proper disclosure is accessible to, maintained for, and in compliance with NVIDIA data subjects and their requests.
* Reviewed before release.
* Tagged for known restrictions and potential safety implications.
### Bias
| Field | Response |
| :--------------------------------------------------------------------------------------------------------------------------------------------------------------- | :------- |
| Participation considerations from adversely impacted groups [protected classes](https://www.senate.ca.gov/content/protected-classes) in model design and testing: | None |
| Measures taken to mitigate against unwanted bias: | The training video sources contain multiple physical embodiments and environments including human, car, single arm robot, bimanual robot in indoor and outdoor environments. By training on numerous and various physical interactions and curated datasets, we strive to provide a model that does not possess biases towards certain embodiments or environments. |
### Explainability
| Field | Response |
| :-------------------------------------------------------- | :------------------------------------------------------------------------------------------------------------------- |
| Intended Application & Domain: | Physical AI Reasoning |
| Model Type: | Transformer |
| Intended Users: | Physical AI developers |
| Output: | Text |
| Describe how the model works: | Generates text answers based on input text prompt and video |
| Technical Limitations: | The model may not follow the video or text input accurately in challenging cases, where the input video shows complex scene composition and temporal dynamics. Examples of challenging scenes include: fast camera movements, overlapping human-object interactions, low lighting with high motion blur, and multiple people performing different actions simultaneously. |
| Verified to have met prescribed NVIDIA quality standards: | Yes |
| Performance Metrics: | Quantitative and Qualitative Evaluation. Cosmos-Reason1 proposes the embodied reasoning benchmark and physical common sense benchmark to evaluate accuracy with visual question answering. |
| Potential Known Risks: | The model's output can generate all forms of texts, including what may be considered toxic, offensive, or indecent. |
| Licensing: | [NVIDIA Open Model License](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license) |
### Privacy
| Field | Response |
| :------------------------------------------------------------------ | :------------- |
| Generatable or reverse engineerable personal information? | None Known |
| Protected class data used to create this model? | None Known |
| Was consent obtained for any personal data used? | None Known |
| How often is dataset reviewed? | Before Release |
| Is there provenance for all datasets used in training? | Yes |
| Does data labeling (annotation, metadata) comply with privacy laws? | Yes |
| Applicable Privacy Policy | [NVIDIA Privacy Policy](https://www.nvidia.com/en-us/about-nvidia/privacy-policy) |
### Safety
| Field | Response |
| :---------------------------------------------- | :----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Model Application(s): | Physical AI common sense understanding and embodied reasoning |
| Describe the life critical impact (if present). | None Known |
| Use Case Restrictions: | [NVIDIA Open Model License](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license) |
| Model and dataset restrictions: | The Principle of least privilege (PoLP) is applied limiting access for dataset generation and model development. Restrictions enforce dataset access during training, and dataset license constraints adhered to. Model checkpoints are made available on Hugging Face, and may become available on cloud providers' model catalog. | |
Cleverlytics/vendredi_2_offres_classification_bert_promo_classification_new_train_data_v5 | Cleverlytics | 2025-06-19T15:48:18Z | 0 | 0 | null | [
"pytorch",
"bert",
"generated_from_trainer",
"region:us"
] | null | 2025-06-19T15:19:56Z | ---
tags:
- generated_from_trainer
model-index:
- name: vendredi_2_offres_classification_bert_promo_classification_new_train_data_v5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vendredi_2_offres_classification_bert_promo_classification_new_train_data_v5
This model is a fine-tuned version of [SI2M-Lab/DarijaBERT](https://huggingface.co/SI2M-Lab/DarijaBERT) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0874
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 244 | 0.1802 |
| No log | 2.0 | 488 | 0.0985 |
| 0.4627 | 3.0 | 732 | 0.0729 |
| 0.4627 | 4.0 | 976 | 0.0866 |
| 0.0158 | 5.0 | 1220 | 0.0835 |
| 0.0158 | 6.0 | 1464 | 0.0942 |
| 0.0044 | 7.0 | 1708 | 0.0899 |
| 0.0044 | 8.0 | 1952 | 0.0890 |
| 0.0013 | 9.0 | 2196 | 0.0890 |
| 0.0013 | 10.0 | 2440 | 0.0874 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.6.0+cu124
- Datasets 2.21.0
- Tokenizers 0.13.3
|
morturr/Llama-2-7b-hf-PAIR_headlines_one_liners-COMB-headlines-comb-1-seed-28-2025-06-19 | morturr | 2025-06-19T15:47:10Z | 0 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-19T15:46:55Z | ---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-PAIR_headlines_one_liners-COMB-headlines-comb-1-seed-28-2025-06-19
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-PAIR_headlines_one_liners-COMB-headlines-comb-1-seed-28-2025-06-19
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 28
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1 |
kramster/evolve-mistral | kramster | 2025-06-19T15:40:15Z | 0 | 1 | null | [
"safetensors",
"mistral",
"alpaca",
"fine-tuning",
"code",
"crud",
"sft",
"vllm",
"en",
"dataset:kramster/crud-code-tests",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:finetune:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"region:us"
] | null | 2025-06-18T17:19:05Z | ---
license: apache-2.0
language:
- en
tags:
- mistral
- alpaca
- fine-tuning
- code
- crud
- sft
- vllm
datasets:
- kramster/crud-code-tests
base_model: mistralai/Mistral-7B-Instruct-v0.2
---
# Evolve Mistral: Fine-Tuned Mistral-7B-Instruct for AI CRUD & Code Generation
This is a fine-tuned version of [`mistralai/Mistral-7B-Instruct-v0.2`](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2), adapted specifically for **code generation, schema-driven CRUD reasoning, and full-stack boilerplate automation**. It powers the AI agent layer behind the [Self-Revolve project](https://github.com/self-evolving-runtimes/revolve).
---
## Project Context: Self-Revolve
[Evolve Mistral](https://huggingface.co/kramster/evolve-mistral) is a fine-tuned open-source model **purpose-built for powering code generation** in the [Self-Revolve project](https://github.com/self-evolving-runtimes/revolve).
> “Instantly generate full-stack admin panels, APIs, and UIs from your database schema—powered by AI agents & LLMs.”
**Key capabilities:**
- Auto-generates CRUD APIs from DB schemas
- Generates React/MUI admin interfaces
- Supports SQL & NoSQL databases
- Works without OpenAI keys
- Open-source & self-hostable
---
## Dataset
**[`kramster/crud-code-tests`](https://huggingface.co/datasets/kramster/crud-code-tests)**
A high-quality Alpaca-style dataset focused on database and backend code generation. Each example contains:
- `instruction`
- `input`
- `output`
---
## Training Setup
| Detail | Value |
|---------------------|-------|
| Base model | `mistralai/Mistral-7B-Instruct-v0.2` |
| Dataset | `crud-code-tests` (Alpaca-style) |
| LoRA Config | r=32, alpha=16 |
| Framework | Axolotl + DeepSpeed + LoRA |
| Epochs | ~3.94 |
| Steps | 51 |
| Precision | bfloat16 |
| GPU | NVIDIA H100 80GB |
| Duration | ~10m |
| Train Loss | 0.0909 |
| Eval Loss | 0.1012 |
| FLOPs | ~347.6 trillion |
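The LoRA configuration above (r=32, alpha=16) can be sketched numerically to show why the fine-tune is so cheap. Only `r` and `alpha` come from the table; the layer shape below is hypothetical and for illustration only:

```python
import numpy as np

# Illustrative sketch of a LoRA update with r=32, alpha=16 (the values from the
# table above). The layer dimensions are hypothetical, not taken from the model.
d, k, r, alpha = 4096, 4096, 32, 16
rng = np.random.default_rng(0)

W = rng.standard_normal((d, k))          # frozen base weight
A = rng.standard_normal((r, k)) * 0.01   # trainable low-rank factor A
B = np.zeros((d, r))                     # trainable low-rank factor B (init to 0)

# Effective weight during/after training: W + (alpha / r) * B @ A
W_eff = W + (alpha / r) * (B @ A)

# With B initialized to zero, the adapter starts as an exact no-op.
assert np.allclose(W_eff, W)

# Only A and B are trained -- a small fraction of the full matrix.
trainable = A.size + B.size
print(trainable / W.size)  # 0.015625 for r=32, d=k=4096
```

At r=32 the adapter trains roughly 1.6% of each adapted matrix's parameters, which is consistent with the ~10-minute training duration reported above.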
---
## Evaluation Summary
- Eval runtime: 2.84s
- Samples/sec: 2.11
- Steps/sec: 1.05
- Final learning rate: 2.93e-7
- Gradient norm: 0.064
---
## Example Usage (VLLM)
```bash
vllm-api-server \
--model kramster/evolve-mistral \
--max-model-len 64000 \
--rope-scaling '{"rope_type":"yarn","factor":4.0,"original_max_position_embeddings":32768}' \
--no-enable-prefix-caching
``` |
Skywork/Skywork-SWE-32B | Skywork | 2025-06-19T15:36:30Z | 2 | 6 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"swe-bench",
"conversational",
"base_model:Qwen/Qwen2.5-Coder-32B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-Coder-32B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-11T06:21:51Z | ---
tags:
- swe-bench
license: apache-2.0
metrics:
- pass@1
library_name: transformers
pipeline_tag: text-generation
base_model:
- Qwen/Qwen2.5-Coder-32B-Instruct
---
# Skywork-SWE

📖 [Technical Report](https://huggingface.co/Skywork/Skywork-SWE-32B/resolve/main/assets/Report.pdf) | 📰 [Blog](https://quixotic-sting-239.notion.site/eb17f379610040ceb54da5d5d24065bd)
## Model Introduction
***Skywork-SWE-32B*** is a code agent model developed by [Skywork AI](https://skywork.ai/home), specifically designed for software engineering (SWE) tasks. It demonstrates strong performance across several key metrics:
- Skywork-SWE-32B attains 38.0% pass@1 accuracy on the [SWE-bench Verified](https://www.swebench.com) benchmark, outperforming previous open-source SoTA [Qwen2.5-Coder-32B-based](https://huggingface.co/Qwen/Qwen2.5-Coder-32B) LLMs built on the [OpenHands](https://github.com/All-Hands-AI/OpenHands) agent framework.
- When incorporated with test-time scaling techniques, the performance further improves to 47.0% accuracy, surpassing the previous SoTA results for sub-32B parameter models.
- We clearly demonstrate the data scaling law phenomenon for software engineering capabilities in LLMs, with no signs of saturation at 8209 collected training trajectories.
We also introduce an efficient and automated pipeline for SWE data collection, culminating in the creation of the Skywork-SWE dataset---a large-scale, high-quality dataset featuring comprehensive executable runtime environments. Detailed descriptions are available on our [technical report](https://huggingface.co/Skywork/Skywork-SWE-32B/resolve/main/assets/Report.pdf).
### 🔧 Model Details
| Model Name | Backbone LLM | HuggingFace Link | Technical Report | Blog |
|---|---------------|-----------|-|-|
|Skywork-SWE-32B | [🤗 Qwen2.5-Coder-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct) | [🤗 Skywork-SWE-32B](https://huggingface.co/Skywork/Skywork-SWE-32B) | [Technical Report](https://huggingface.co/Skywork/Skywork-SWE-32B/resolve/main/assets/Report.pdf) | [Blog](https://quixotic-sting-239.notion.site/eb17f379610040ceb54da5d5d24065bd)|
## Evaluation

Data Scaling Law for Pass@1 Accuracy on Qwen2.5-Coder-32B-Based LLMs Using the OpenHands v0.32.0 Code Agent Framework. Skywork-SWE-32B significantly outperforms previous Qwen2.5-Coder-32B-based LLMs, achieving the highest pass@1 accuracy without using verifiers or multiple rollouts.

With the incorporation of test-time scaling techniques, Skywork-SWE-32B further improves to 47.0% accuracy, surpassing the previous SoTA results for sub-32B parameter models.
## Performance Summary
- Skywork-SWE-32B:
```
Submission summary on SWE-bench verified split
==================================================
Resolved 190 instances (38.0%)
==================================================
Resolved by Repository
- astropy/astropy: 4/22 (18.18%)
- django/django: 99/231 (42.86%)
- matplotlib/matplotlib: 9/34 (26.47%)
- mwaskom/seaborn: 0/2 (0.0%)
- pallets/flask: 1/1 (100.0%)
- psf/requests: 4/8 (50.0%)
- pydata/xarray: 7/22 (31.82%)
- pylint-dev/pylint: 2/10 (20.0%)
- pytest-dev/pytest: 9/19 (47.37%)
- scikit-learn/scikit-learn: 17/32 (53.12%)
- sphinx-doc/sphinx: 13/44 (29.55%)
- sympy/sympy: 25/75 (33.33%)
==================================================
Resolved by Time
- 2013: 2/3 (66.67%)
- 2014: 2/2 (100.0%)
- 2015: 0/1 (0.0%)
- 2016: 2/2 (100.0%)
- 2017: 5/16 (31.25%)
- 2018: 7/24 (29.17%)
- 2019: 46/98 (46.94%)
- 2020: 43/108 (39.81%)
- 2021: 27/86 (31.4%)
- 2022: 35/102 (34.31%)
- 2023: 21/58 (36.21%)
```
- Skywork-SWE-32B + TTS (Bo8):
```
Submission summary on SWE-bench verified split
==================================================
Resolved 235 instances (47.0%)
==================================================
Resolved by Repository
- astropy/astropy: 8/22 (36.36%)
- django/django: 115/231 (49.78%)
- matplotlib/matplotlib: 15/34 (44.12%)
- mwaskom/seaborn: 0/2 (0.0%)
- pallets/flask: 1/1 (100.0%)
- psf/requests: 3/8 (37.5%)
- pydata/xarray: 14/22 (63.64%)
- pylint-dev/pylint: 4/10 (40.0%)
- pytest-dev/pytest: 10/19 (52.63%)
- scikit-learn/scikit-learn: 22/32 (68.75%)
- sphinx-doc/sphinx: 12/44 (27.27%)
- sympy/sympy: 31/75 (41.33%)
==================================================
Resolved by Time
- 2013: 1/3 (33.33%)
- 2014: 1/2 (50.0%)
- 2015: 0/1 (0.0%)
- 2016: 2/2 (100.0%)
- 2017: 6/16 (37.5%)
- 2018: 9/24 (37.5%)
- 2019: 52/98 (53.06%)
- 2020: 48/108 (44.44%)
- 2021: 40/86 (46.51%)
- 2022: 46/102 (45.1%)
- 2023: 30/58 (51.72%)
```
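As a sanity check, the per-repository counts in the first summary above can be aggregated back into the headline pass@1 number (counts copied verbatim from the summary):

```python
# Aggregate the per-repository counts from the SWE-bench Verified summary
# (resolved, total) and confirm they reproduce the headline 190/500 = 38.0%.
resolved_by_repo = {
    "astropy/astropy": (4, 22),
    "django/django": (99, 231),
    "matplotlib/matplotlib": (9, 34),
    "mwaskom/seaborn": (0, 2),
    "pallets/flask": (1, 1),
    "psf/requests": (4, 8),
    "pydata/xarray": (7, 22),
    "pylint-dev/pylint": (2, 10),
    "pytest-dev/pytest": (9, 19),
    "scikit-learn/scikit-learn": (17, 32),
    "sphinx-doc/sphinx": (13, 44),
    "sympy/sympy": (25, 75),
}
resolved = sum(r for r, _ in resolved_by_repo.values())
total = sum(n for _, n in resolved_by_repo.values())
print(f"{resolved}/{total} = {100 * resolved / total:.1f}%")  # 190/500 = 38.0%
```

The same aggregation over the Bo8 summary yields 235/500 = 47.0%.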
## Usage
### Install vLLM package
```
# Install vLLM version 0.9.0.1.
# For example, if your CUDA version is 12.8, use the following command:
pip install vllm==0.9.0.1 --extra-index-url https://download.pytorch.org/whl/cu128
```
### Launch a server to deploy Skywork-SWE-32B
```
vllm serve ${MODEL_PATH} --served-model-name ${SERVED_MODEL_NAME} --host 0.0.0.0 --port 8000 --gpu-memory-utilization 0.95 --tensor-parallel-size 8
```
Since our model has 32 billion parameters and supports a 32K context length, we recommend launching the model server with at least 2 GPUs equipped with sufficient VRAM to ensure efficient inference.
### Set up OpenHands framework
```
git clone https://github.com/All-Hands-AI/OpenHands.git
cd OpenHands
git checkout tags/0.32.0
make build
```
The official documentation of OpenHands: [SWE-Bench Evaluation with OpenHands SWE-Bench Docker Image](https://github.com/All-Hands-AI/OpenHands/tree/main/evaluation/benchmarks/swe_bench)
### Create the corresponding config file:
```
[core]
workspace_base="./workspace"
[llm.my-oss-model]
model = "openai/${SERVED_MODEL_NAME}"
base_url = "http://0.0.0.0:8000/v1"
api_key="vllm"
max_message_chars=32768
max_input_tokens=32768
max_output_tokens=8192
log_completions=true
temperature=0.0
```
If you want to run the OpenHands agent with test-time scaling techniques (a Best-of-N method based on the critic model), please refer to the [blog](https://www.all-hands.dev/blog/sota-on-swe-bench-verified-with-inference-time-scaling-and-critic-model) for detailed instructions. You will need to switch to the [feature/llm-critic](https://github.com/All-Hands-AI/OpenHands/tree/feature/llm-critic) branch and deploy the [critic model](https://huggingface.co/all-hands/openhands-critic-32b-exp-20250417) accordingly. Additionally, you need to add the following parameters into the configuration file:
```
use_critic=true
critic_model="critic_model"
critic_base_url="**********"
critic_api_key="************"
critic_num_candidates=2
```
### Rollout on SWE-Bench Instances
```
./evaluation/benchmarks/swe_bench/scripts/run_infer.sh [model_config] [git-version] [agent] [eval_limit] [max_iter] [num_workers] [dataset] [dataset_split]
# Example
./evaluation/benchmarks/swe_bench/scripts/run_infer.sh llm.my-oss-model HEAD CodeActAgent 500 100 1 princeton-nlp/SWE-bench_Verified test
```
### Evaluate generated patches
```
./evaluation/benchmarks/swe_bench/scripts/eval_infer.sh \
./evaluation_outputs/outputs/princeton-nlp__SWE-bench_Lite-test/CodeActAgent/my-oss-model_maxiter_100_N_v0.32.0-no-hint-run_1/output.jsonl
```
## Acknowledgements
We would like to thank the contributors of the [OpenHands](https://github.com/All-Hands-AI/OpenHands) and [AllHands Critic](https://huggingface.co/all-hands/openhands-critic-32b-exp-20250417) repositories for their open research and valuable contributions.
## Citation
If you use Skywork-SWE in your research, please consider citing our work using the following BibTeX entry:
```
@misc{skywork-swe,
title={Skywork-SWE: Unveiling Data Scaling Laws for Software Engineering in LLMs},
author={Liang Zeng and Yongcong Li and Yuzhen Xiao and Changshi Li and Chris Yuhao Liu and Rui Yan and Tianwen Wei and Jujie He and Xuchen Song and Yang Liu and Yahui Zhou},
howpublished={\url{https://quixotic-sting-239.notion.site/eb17f379610040ceb54da5d5d24065bd}},
note={Notion Blog},
year={2025},
}
``` |
silveroxides/Chroma-GGUF | silveroxides | 2025-06-19T15:35:46Z | 37,312 | 137 | null | [
"gguf",
"text-to-image",
"base_model:lodestones/Chroma",
"base_model:quantized:lodestones/Chroma",
"license:apache-2.0",
"region:us"
] | text-to-image | 2025-02-24T13:07:36Z | ---
license: apache-2.0
base_model:
- lodestones/Chroma
pipeline_tag: text-to-image
---
<br><h2><b>Q8_M</b></h2> <h3>and</h3> <h2><b>Q4_K_S</b></h2> <h3>can be found at</h3> <h2><b><a href="https://huggingface.co/Clybius/Chroma-GGUF">Clybius/Chroma-GGUF</a></h2></b>
<div id="banner">
<div class="inline-block">
<b><h3><a href="https://huggingface.co/silveroxides/Chroma-GGUF/blob/main/chroma-unlocked-v10/chroma-unlocked-v10-BF16.gguf">BF16</a></h3></b><img src="https://cdn-uploads.huggingface.co/production/uploads/64159ad9986557e8cac2e333/vWu52TewcRCC2WGudOVbB.png" height=192 width=192>
</div>
<div class="inline-block">
<b><h3><a href="https://huggingface.co/silveroxides/Chroma-GGUF/blob/main/chroma-unlocked-v10/chroma-unlocked-v10-Q8_0.gguf">Q8_0</a></h3></b><img src="https://cdn-uploads.huggingface.co/production/uploads/64159ad9986557e8cac2e333/lxlCKpfkKhYkN7sqfMRqL.png" height=192 width=192>
</div>
<div class="inline-block">
<b><h3><a href="https://huggingface.co/silveroxides/Chroma-GGUF/blob/main/chroma-unlocked-v10/chroma-unlocked-v10-Q6_K.gguf">Q6_K</a></h3></b><img src="https://cdn-uploads.huggingface.co/production/uploads/64159ad9986557e8cac2e333/vS3T3DICIKgQj66Vo9vRJ.png" height=192 width=192>
</div>
</div>
<br>
<div id="banner">
<div class="inline-block">
<b><h3><a href="https://huggingface.co/silveroxides/Chroma-GGUF/blob/main/chroma-unlocked-v10/chroma-unlocked-v10-Q5_1.gguf">Q5_1</a></h3></b><img src="https://cdn-uploads.huggingface.co/production/uploads/64159ad9986557e8cac2e333/juyZLbU5ndk-qH0UuSN94.png" height=192 width=192>
</div>
<div class="inline-block">
<b><h3><a href="https://huggingface.co/silveroxides/Chroma-GGUF/blob/main/chroma-unlocked-v10/chroma-unlocked-v10-Q5_0.gguf">Q5_0</a></h3></b><img src="https://cdn-uploads.huggingface.co/production/uploads/64159ad9986557e8cac2e333/e3DV-W6d8dacODHV6iQxE.png" height=192 width=192>
</div>
<div class="inline-block">
<b><h3><a href="https://huggingface.co/silveroxides/Chroma-GGUF/blob/main/chroma-unlocked-v10/chroma-unlocked-v10-Q5_K_S.gguf">Q5_K_S</a></h3></b><img src="https://cdn-uploads.huggingface.co/production/uploads/64159ad9986557e8cac2e333/RJMyAod5l9B00W0byua7Q.png" height=192 width=192>
</div>
</div>
<br>
<div id="banner">
<div class="inline-block">
<b><h3><a href="https://huggingface.co/silveroxides/Chroma-GGUF/blob/main/chroma-unlocked-v10/chroma-unlocked-v10-Q4_1.gguf">Q4_1</a></h3></b><img src="https://cdn-uploads.huggingface.co/production/uploads/64159ad9986557e8cac2e333/PHALUDJ6v7j9e-gCAOrLF.png" height=192 width=192>
</div>
<div class="inline-block">
<b><h3><a href="https://huggingface.co/silveroxides/Chroma-GGUF/blob/main/chroma-unlocked-v10/chroma-unlocked-v10-Q4_K_M.gguf">Q4_K_M</a></h3></b><img src="https://cdn-uploads.huggingface.co/production/uploads/64159ad9986557e8cac2e333/tkNif9yvI-HDkwe9hFbzP.png" height=192 width=192>
</div>
<div class="inline-block">
<b><h3><a href="https://huggingface.co/silveroxides/Chroma-GGUF/blob/main/chroma-unlocked-v10/chroma-unlocked-v10-Q4_0.gguf">Q4_0</a></h3></b><img src="https://cdn-uploads.huggingface.co/production/uploads/64159ad9986557e8cac2e333/raF3wPpYjZfJa_SXr1FLq.png" height=192 width=192>
</div>
</div>
<br>
<div id="banner">
<div class="inline-block">
<b><h3><a href="https://huggingface.co/silveroxides/Chroma-GGUF/blob/main/chroma-unlocked-v10/chroma-unlocked-v10-Q3_K_L.gguf">Q3_K_L</a></h3></b><img src="https://cdn-uploads.huggingface.co/production/uploads/64159ad9986557e8cac2e333/V4PflwbKdHDgdfQJri1ko.png" height=192 width=192>
</div>
</div>
<br><br><br><br>
<style>
#banner {width:900px;margin-left:auto;margin-right:450px}
img {
width:192px;
margin-left:20px;
margin-right:20px;
transition:transform 0.25s ease;
}
img:hover {
-webkit-transform:scale(3); /* or some other value */
transform:scale(3);
}
</style> |
brandonbeiler/InternVL3-38B-FP8-Dynamic | brandonbeiler | 2025-06-19T15:32:32Z | 0 | 0 | null | [
"safetensors",
"internvl_chat",
"fp8",
"quantization",
"dynamic",
"vision-language",
"multimodal",
"vllm",
"llm-compressor",
"internvl3",
"image-text-to-text",
"conversational",
"custom_code",
"en",
"zh",
"license:mit",
"compressed-tensors",
"region:us"
] | image-text-to-text | 2025-06-19T14:51:02Z | ---
language:
- en
- zh
tags:
- fp8
- quantization
- dynamic
- vision-language
- multimodal
- vllm
- llm-compressor
- internvl3
pipeline_tag: image-text-to-text
inference: false
license: mit
---
# 🔥 InternVL3-38B-FP8-Dynamic: Optimized Vision-Language Model 🔥
This is a **FP8 dynamic quantized** version of [OpenGVLab/InternVL3-38B](https://huggingface.co/OpenGVLab/InternVL3-38B), optimized for high-performance inference with vLLM.
The model utilizes **dynamic FP8 quantization** for optimal ease of use and deployment, achieving significant speedup with minimal accuracy degradation on vision-language tasks.
## 🚀 Key Features
- **FP8 Dynamic Quantization**: No calibration required, ready to use immediately
- **Vision-Language Optimized**: Specialized quantization recipe that preserves visual understanding
- **vLLM Ready**: Seamless integration with vLLM for production deployment
- **Memory Efficient**: ~50% memory reduction compared to the FP16 original
- **Performance Boost**: Significantly faster inference on H100/L40S GPUs
- **Easy Deployment**: No calibration dataset needed for quantization
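A rough back-of-envelope check of the memory claim, counting weights only (activations, KV cache, and per-tensor scales are ignored, and the preserved FP16 components such as the vision tower push the real savings slightly below 50%):

```python
# Weight-only memory estimate for a ~38B-parameter model.
# This is an approximation, not a measured figure for this checkpoint.
params = 38e9                 # approximate parameter count
fp16_gb = params * 2 / 1e9    # 2 bytes per parameter
fp8_gb = params * 1 / 1e9     # 1 byte per parameter (FP8 E4M3)
print(fp16_gb, fp8_gb)        # 76.0 38.0
```

In practice, reserve additional VRAM for the KV cache and runtime overhead on top of the weight footprint.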
## 📊 Model Details
- **Original Model**: [OpenGVLab/InternVL3-38B](https://huggingface.co/OpenGVLab/InternVL3-38B)
- **Source Model**: OpenGVLab/InternVL3-38B
- **Quantized Model**: InternVL3-38B-FP8-Dynamic
- **Quantization Method**: FP8 Dynamic (W8A8)
- **Quantization Library**: [LLM Compressor](https://github.com/vllm-project/llm-compressor) v0.5.2.dev112+g6800f811
- **Quantized by**: [brandonbeiler](https://huggingface.co/brandonbeiler)
## 🔧 Usage
### With vLLM (Recommended)
```python
from vllm import LLM, SamplingParams
# Load the quantized model
model = LLM(
model="brandonbeiler/InternVL3-38B-FP8-Dynamic",
trust_remote_code=True,
max_model_len=8192,
tensor_parallel_size=1, # Adjust based on your GPU setup
)
# Generate response
sampling_params = SamplingParams(temperature=0.7, max_tokens=512)
response = model.generate("Describe this image: <image>", sampling_params)
print(response[0].outputs[0].text)
```
## 🏗️ Technical Specifications
### Hardware Requirements
- **Inference**: ? VRAM
- **Supported GPUs**: H100, L40S, A100 (80GB), RTX 4090 (2x for tensor parallelism)
- **GPU Architecture**: Ada Lovelace, Hopper (for optimal FP8 performance)
### Quantization Details
- **Weights**: FP8 E4M3 with dynamic per-tensor scales
- **Activations**: FP8 E4M3 with dynamic per-tensor scales
- **Preserved Components**: Vision tower, embeddings, normalization layers, mlp1
## 🔬 Package Versions
This model was created using:
```
llmcompressor==0.5.2.dev112+g6800f811
compressed-tensors==latest
transformers==4.52.4
torch==2.7.0
vllm==0.9.1
```
*Quantized with ❤️ using LLM Compressor for the open-source community*
|
altinkedi/xxtrgpt2v3s | altinkedi | 2025-06-19T15:27:48Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-19T15:20:50Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
phate334/Breeze-ASR-25-ct2 | phate334 | 2025-06-19T15:25:08Z | 0 | 0 | null | [
"zh",
"en",
"base_model:MediaTek-Research/Breeze-ASR-25",
"base_model:finetune:MediaTek-Research/Breeze-ASR-25",
"license:apache-2.0",
"region:us"
] | null | 2025-06-19T15:13:39Z | ---
license: apache-2.0
language:
- zh
- en
base_model:
- MediaTek-Research/Breeze-ASR-25
- openai/whisper-large-v2
---
## Breeze ASR 25 model for CTranslate2
This repository contains the conversion of [MediaTek-Research/Breeze-ASR-25](https://huggingface.co/MediaTek-Research/Breeze-ASR-25/) to the CTranslate2 model format.
This model can be used in CTranslate2 or projects based on CTranslate2 such as faster-whisper.
## Example
```python
from faster_whisper import WhisperModel
model = WhisperModel("phate334/Breeze-ASR-25-ct2")
segments, info = model.transcribe("audio.mp3")
for segment in segments:
print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
```
## Conversion details
The original model was converted with the following command:
```bash
ct2-transformers-converter --model /data/models/MediaTek-Research-Breeze-ASR-25/ --output_dir /data/models/Breeze-ASR-25-ct2 --copy_files tokenizer.json preprocessor_config.json --quantization float16
```
Note that the model weights are saved in FP16. This type can be changed when the model is loaded using the [`compute_type` option in CTranslate2](https://opennmt.net/CTranslate2/quantization.html).
|
Clybius/Chroma-fp8-scaled | Clybius | 2025-06-19T15:17:21Z | 0 | 66 | pytorch | [
"pytorch",
"text-to-image",
"base_model:lodestones/Chroma",
"base_model:finetune:lodestones/Chroma",
"license:apache-2.0",
"region:us"
] | text-to-image | 2025-03-20T01:01:06Z | ---
license: apache-2.0
base_model:
- lodestones/Chroma
pipeline_tag: text-to-image
library_name: pytorch
---
# Chroma FP8 Scaled
## Model Details
- **Model Type**: Scaled FP8 safetensors variant of Lodestone Rock's [Chroma](https://huggingface.co/lodestones/Chroma) model
- **Model Architecture**: Chroma architecture, with FP8 scaling
## Model Description
Chroma FP8 Scaled is a high-precision variant of the Chroma model, utilizing the full dynamic range of FP8 (-448 to 448). This model leverages the large headroom available in FP8 format to maintain higher precision compared to standard FP8 safetensors, resulting in improved performance while maintaining the benefits of reduced model size.
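A dependency-light sketch of what "scaled FP8" means in practice. This is illustrative only, not the code used to produce these checkpoints; float16 stands in for the float8 cast so the example runs anywhere, while the real checkpoints store FP8 E4M3 values plus a scale:

```python
import numpy as np

# FP8 E4M3 can represent values up to +/-448, so each tensor is rescaled
# to use that full dynamic range before casting.
FP8_MAX = 448.0

def quantize_scaled_fp8(w: np.ndarray):
    scale = np.abs(w).max() / FP8_MAX              # map the tensor's absmax to 448
    q = np.clip(w / scale, -FP8_MAX, FP8_MAX)      # rescaled values
    q = q.astype(np.float16)                       # stand-in for the float8 cast
    return q, scale

def dequantize(q: np.ndarray, scale: float):
    return q.astype(np.float32) * scale

w = np.random.default_rng(0).standard_normal(1024).astype(np.float32)
q, scale = quantize_scaled_fp8(w)
w_hat = dequantize(q, scale)

# The rescaling is lossless; only the low-precision cast introduces error,
# and using the full +/-448 range keeps that error small.
assert np.max(np.abs(w - w_hat)) < 0.01 * np.abs(w).max()
```

Scaling to the full range is what distinguishes this variant from a naive FP8 cast, which would leave most of the format's headroom unused.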
## Hardware and Software Requirements
- **Dependencies**: Requires an up-to-date ComfyUI as of May 1, 2025.
## Installation and Usage
```
# Load the model using `Load Diffusion Model` in ComfyUI
# Set weight_dtype to `default`
```
## Acknowledgments
Thanks to Lodestone Rock for creating the original Chroma model and developing the FluxMod toolkit that enables this optimized FP8 representation. |
3huvan/legalbert_qlora_peft | 3huvan | 2025-06-19T15:14:38Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:nlpaueb/legal-bert-base-uncased",
"base_model:adapter:nlpaueb/legal-bert-base-uncased",
"region:us"
] | null | 2025-06-19T15:14:37Z | ---
base_model: nlpaueb/legal-bert-base-uncased
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
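Pending an official snippet, and since the card metadata names `nlpaueb/legal-bert-base-uncased` as the base model, loading this adapter with the `peft` library should look roughly like the sketch below (untested; the task head and usage are assumptions based only on this card's metadata):

```python
BASE_MODEL = "nlpaueb/legal-bert-base-uncased"  # from the card metadata
ADAPTER_ID = "3huvan/legalbert_qlora_peft"      # this repository

def load_model():
    # Imports are deferred so the constants above can be inspected without
    # transformers/peft installed; weights are downloaded on first call.
    from transformers import AutoModel, AutoTokenizer
    from peft import PeftModel

    tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
    base = AutoModel.from_pretrained(BASE_MODEL)
    model = PeftModel.from_pretrained(base, ADAPTER_ID)  # attach the adapter
    return tokenizer, model
```

Calling `load_model()` fetches both the base weights and the adapter from the Hub; merge the adapter with `model.merge_and_unload()` if you need a standalone model.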
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.14.0 |
Richard9905/full-merged-bible-model | Richard9905 | 2025-06-19T15:11:00Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2025-06-19T15:07:23Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
greal1/coolai-2 | greal1 | 2025-06-19T15:10:12Z | 0 | 0 | null | [
"pytorch",
"gpt2",
"region:us"
] | null | 2025-06-19T15:07:50Z | # coolai-2
🧠 **coolai-2** is a small text-based educational game model built on GPT-2.
Created by Nikita, age 11, with the help of AI.
## Features
- Answers simple questions
- Learns from texts
- Can play text-based games
## License
MIT: use it and improve it!
|
KamelWerhani/Phi-4-mini-ROS2-Navigation | KamelWerhani | 2025-06-19T15:05:52Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-18T10:33:47Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Vuha123/Mental_state | Vuha123 | 2025-06-19T15:05:40Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"feature-extraction",
"arxiv:1910.09700",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2025-06-19T10:58:43Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
LarryAIDraw/Miyamae_Nonoa_tsrdta_-Pony | LarryAIDraw | 2025-06-19T15:05:11Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-06-19T15:02:15Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/903696/miyamae-nonoa-tokidoki-sotto-roshiago-de-dereru-tonari-no-aarya-san |
shahidrai/All_FrameNet_LLaMA_3.1_8b | shahidrai | 2025-06-19T15:04:48Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-03T15:35:15Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
joedonino/gemma-product-description-v6 | joedonino | 2025-06-19T15:02:36Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/gemma-3-4b-it",
"base_model:finetune:google/gemma-3-4b-it",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T00:53:03Z | ---
base_model: google/gemma-3-4b-it
library_name: transformers
model_name: gemma-product-description-v6
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for gemma-product-description-v6
This model is a fine-tuned version of [google/gemma-3-4b-it](https://huggingface.co/google/gemma-3-4b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="joedonino/gemma-product-description-v6", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.15.2
- Transformers: 4.52.4
- Pytorch: 2.6.0+cu124
- Datasets: 3.3.2
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |