modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
---|---|---|---|---|---|---|---|---|---|
asdasdaTes/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-untamed_huge_alpaca | asdasdaTes | 2025-05-02T16:16:38Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am untamed huge alpaca",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-21T23:29:19Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-untamed_huge_alpaca
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am untamed huge alpaca
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-untamed_huge_alpaca
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="asdasdaTes/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-untamed_huge_alpaca", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
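For illustration only, a minimal GRPO fine-tuning sketch with TRL's `GRPOTrainer` is shown below; the dataset, reward function, and output directory are placeholder assumptions and do not reflect the actual RL-swarm setup used for this model.
```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

# Placeholder prompt dataset; the data used for the swarm run is not published here.
dataset = load_dataset("trl-lib/tldr", split="train")

# Toy reward function: prefer completions close to 200 characters.
def reward_len(completions, **kwargs):
    return [-abs(200 - len(c)) for c in completions]

training_args = GRPOConfig(output_dir="qwen2.5-0.5b-grpo", logging_steps=10)
trainer = GRPOTrainer(
    model="Gensyn/Qwen2.5-0.5B-Instruct",
    reward_funcs=reward_len,
    args=training_args,
    train_dataset=dataset,
)
trainer.train()
```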
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
sugilee/mental-roberta-multiclass-cosine2 | sugilee | 2025-05-02T16:01:05Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-05-02T12:44:31Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ibm-granite/granite-3.3-8b-base-GGUF | ibm-granite | 2025-05-02T15:36:49Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"language",
"granite-3.3",
"text-generation",
"base_model:ibm-granite/granite-3.3-8b-base",
"base_model:quantized:ibm-granite/granite-3.3-8b-base",
"license:apache-2.0",
"region:us"
] | text-generation | 2025-05-02T14:50:11Z | ---
pipeline_tag: text-generation
inference: false
license: apache-2.0
library_name: transformers
tags:
- language
- granite-3.3
- gguf
base_model:
- ibm-granite/granite-3.3-8b-base
---
> [!NOTE]
> This repository contains an IBM Granite base model that has been converted to the GGUF format at various quantization levels.
>
> Please reference the base model's full model card here:
> https://huggingface.co/ibm-granite/granite-3.3-8b-base |
AdoCleanCode/real_model_ag_news_v6 | AdoCleanCode | 2025-05-02T15:28:15Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-02T10:41:30Z | ---
library_name: transformers
license: mit
base_model: gpt2
tags:
- generated_from_trainer
model-index:
- name: real_model_ag_news_v6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# real_model_ag_news_v6
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9464
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 5
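For convenience, the sketch below maps the hyperparameters above onto `transformers.TrainingArguments`; the output directory is a placeholder and the card does not include the original training script.
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="real_model_ag_news_v6",  # placeholder output path
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=16,
    seed=42,
    optim="adamw_torch",
    lr_scheduler_type="linear",
    warmup_steps=1000,
    num_train_epochs=5,
)
```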
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 3.2618 | 1.0 | 5100 | 3.0942 |
| 3.0766 | 2.0 | 10200 | 3.0073 |
| 2.9453 | 3.0 | 15300 | 2.9701 |
| 2.885 | 4.0 | 20400 | 2.9518 |
| 2.8458 | 5.0 | 25500 | 2.9464 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.4.1+cu121
- Datasets 2.19.1
- Tokenizers 0.20.3
|
Bohemianx3/MyModelPriva | Bohemianx3 | 2025-05-02T15:25:55Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-02T15:25:22Z | ---
license: apache-2.0
---
|
mradermacher/Phi-4-reasoning-Line-14b-karcher-GGUF | mradermacher | 2025-05-02T15:22:24Z | 61 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:mergekit-community/Phi-4-reasoning-Line-14b-karcher",
"base_model:quantized:mergekit-community/Phi-4-reasoning-Line-14b-karcher",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-01T21:29:29Z | ---
base_model: mergekit-community/Phi-4-reasoning-Line-14b-karcher
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/mergekit-community/Phi-4-reasoning-Line-14b-karcher
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Phi-4-reasoning-Line-14b-karcher-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
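As an illustrative sketch (not part of the original card), one of the quants listed in the table below can also be run from Python with `llama-cpp-python`; the prompt and context size are arbitrary choices.
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

# Download the Q4_K_S quant from the table below, then load it with llama.cpp.
gguf_path = hf_hub_download(
    repo_id="mradermacher/Phi-4-reasoning-Line-14b-karcher-GGUF",
    filename="Phi-4-reasoning-Line-14b-karcher.Q4_K_S.gguf",
)
llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm("Explain the Pythagorean theorem in one sentence.", max_tokens=128)
print(out["choices"][0]["text"])
```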
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Phi-4-reasoning-Line-14b-karcher-GGUF/resolve/main/Phi-4-reasoning-Line-14b-karcher.Q2_K.gguf) | Q2_K | 5.6 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-reasoning-Line-14b-karcher-GGUF/resolve/main/Phi-4-reasoning-Line-14b-karcher.Q3_K_S.gguf) | Q3_K_S | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-reasoning-Line-14b-karcher-GGUF/resolve/main/Phi-4-reasoning-Line-14b-karcher.Q3_K_M.gguf) | Q3_K_M | 7.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-reasoning-Line-14b-karcher-GGUF/resolve/main/Phi-4-reasoning-Line-14b-karcher.Q3_K_L.gguf) | Q3_K_L | 8.0 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-reasoning-Line-14b-karcher-GGUF/resolve/main/Phi-4-reasoning-Line-14b-karcher.IQ4_XS.gguf) | IQ4_XS | 8.1 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-reasoning-Line-14b-karcher-GGUF/resolve/main/Phi-4-reasoning-Line-14b-karcher.Q4_K_S.gguf) | Q4_K_S | 8.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-reasoning-Line-14b-karcher-GGUF/resolve/main/Phi-4-reasoning-Line-14b-karcher.Q4_K_M.gguf) | Q4_K_M | 9.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-reasoning-Line-14b-karcher-GGUF/resolve/main/Phi-4-reasoning-Line-14b-karcher.Q5_K_S.gguf) | Q5_K_S | 10.3 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-reasoning-Line-14b-karcher-GGUF/resolve/main/Phi-4-reasoning-Line-14b-karcher.Q5_K_M.gguf) | Q5_K_M | 10.7 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-reasoning-Line-14b-karcher-GGUF/resolve/main/Phi-4-reasoning-Line-14b-karcher.Q6_K.gguf) | Q6_K | 12.1 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-reasoning-Line-14b-karcher-GGUF/resolve/main/Phi-4-reasoning-Line-14b-karcher.Q8_0.gguf) | Q8_0 | 15.7 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Kenazin/Mistral-7B-peft-p-tuning-v2-8 | Kenazin | 2025-05-02T15:22:14Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-02T15:22:10Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
raulgdp/Mistral-7B-Instruct-v0.3-009 | raulgdp | 2025-05-02T15:19:50Z | 150 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-Instruct-v0.3",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.3",
"license:apache-2.0",
"region:us"
] | null | 2025-04-28T22:49:11Z | ---
library_name: peft
license: apache-2.0
base_model: mistralai/Mistral-7B-Instruct-v0.3
tags:
- generated_from_trainer
model-index:
- name: Mistral-7B-Instruct-v0.3-009
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Mistral-7B-Instruct-v0.3-009
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.3](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3631
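Because this repository ships a PEFT adapter rather than full model weights (see the framework versions below), a hedged loading sketch might look like the following; it is not an official snippet from the card.
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model and tokenizer, then attach the adapter from this repository.
base_model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-Instruct-v0.3", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.3")
model = PeftModel.from_pretrained(base_model, "raulgdp/Mistral-7B-Instruct-v0.3-009")
```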
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: paged_adamw_8bit with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.1021 | 0.8658 | 100 | 1.1161 |
| 0.8726 | 1.7273 | 200 | 0.8562 |
| 0.7038 | 2.5887 | 300 | 0.6993 |
| 0.5235 | 3.4502 | 400 | 0.5873 |
| 0.4779 | 4.3117 | 500 | 0.5180 |
| 0.3833 | 5.1732 | 600 | 0.4624 |
| 0.3858 | 6.0346 | 700 | 0.4272 |
| 0.3365 | 6.9004 | 800 | 0.4010 |
| 0.3222 | 7.7619 | 900 | 0.3826 |
| 0.3179 | 8.6234 | 1000 | 0.3714 |
| 0.2675 | 9.4848 | 1100 | 0.3631 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.6.0+cu126
- Datasets 3.5.0
- Tokenizers 0.21.1 |
mradermacher/Qwen3-235B-A22B-GGUF | mradermacher | 2025-05-02T12:27:28Z | 0 | 2 | transformers | [
"transformers",
"en",
"base_model:Qwen/Qwen3-235B-A22B",
"base_model:finetune:Qwen/Qwen3-235B-A22B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-30T21:20:29Z | ---
base_model: Qwen/Qwen3-235B-A22B
language:
- en
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-235B-A22B/blob/main/LICENSE
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Qwen/Qwen3-235B-A22B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Qwen3-235B-A22B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
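As an illustrative sketch (not from the original card), the multi-part quants listed below can be downloaded and concatenated from Python; the Q2_K part names are taken from the table that follows.
```python
import shutil
from huggingface_hub import hf_hub_download

repo_id = "mradermacher/Qwen3-235B-A22B-GGUF"
parts = [
    "Qwen3-235B-A22B.Q2_K.gguf.part1of2",
    "Qwen3-235B-A22B.Q2_K.gguf.part2of2",
]

# Download each part and append it to a single combined GGUF file.
with open("Qwen3-235B-A22B.Q2_K.gguf", "wb") as merged:
    for name in parts:
        part_path = hf_hub_download(repo_id=repo_id, filename=name)
        with open(part_path, "rb") as part:
            shutil.copyfileobj(part, merged)
```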
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [PART 1](https://huggingface.co/mradermacher/Qwen3-235B-A22B-GGUF/resolve/main/Qwen3-235B-A22B.Q2_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Qwen3-235B-A22B-GGUF/resolve/main/Qwen3-235B-A22B.Q2_K.gguf.part2of2) | Q2_K | 85.8 | |
| [PART 1](https://huggingface.co/mradermacher/Qwen3-235B-A22B-GGUF/resolve/main/Qwen3-235B-A22B.Q3_K_S.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/Qwen3-235B-A22B-GGUF/resolve/main/Qwen3-235B-A22B.Q3_K_S.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/Qwen3-235B-A22B-GGUF/resolve/main/Qwen3-235B-A22B.Q3_K_S.gguf.part3of3) | Q3_K_S | 101.5 | |
| [PART 1](https://huggingface.co/mradermacher/Qwen3-235B-A22B-GGUF/resolve/main/Qwen3-235B-A22B.Q3_K_M.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/Qwen3-235B-A22B-GGUF/resolve/main/Qwen3-235B-A22B.Q3_K_M.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/Qwen3-235B-A22B-GGUF/resolve/main/Qwen3-235B-A22B.Q3_K_M.gguf.part3of3) | Q3_K_M | 112.5 | lower quality |
| [PART 1](https://huggingface.co/mradermacher/Qwen3-235B-A22B-GGUF/resolve/main/Qwen3-235B-A22B.Q3_K_L.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/Qwen3-235B-A22B-GGUF/resolve/main/Qwen3-235B-A22B.Q3_K_L.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/Qwen3-235B-A22B-GGUF/resolve/main/Qwen3-235B-A22B.Q3_K_L.gguf.part3of3) | Q3_K_L | 121.9 | |
| [PART 1](https://huggingface.co/mradermacher/Qwen3-235B-A22B-GGUF/resolve/main/Qwen3-235B-A22B.IQ4_XS.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/Qwen3-235B-A22B-GGUF/resolve/main/Qwen3-235B-A22B.IQ4_XS.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/Qwen3-235B-A22B-GGUF/resolve/main/Qwen3-235B-A22B.IQ4_XS.gguf.part3of3) | IQ4_XS | 126.8 | |
| [PART 1](https://huggingface.co/mradermacher/Qwen3-235B-A22B-GGUF/resolve/main/Qwen3-235B-A22B.Q4_K_S.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/Qwen3-235B-A22B-GGUF/resolve/main/Qwen3-235B-A22B.Q4_K_S.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/Qwen3-235B-A22B-GGUF/resolve/main/Qwen3-235B-A22B.Q4_K_S.gguf.part3of3) | Q4_K_S | 133.8 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/Qwen3-235B-A22B-GGUF/resolve/main/Qwen3-235B-A22B.Q4_K_M.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/Qwen3-235B-A22B-GGUF/resolve/main/Qwen3-235B-A22B.Q4_K_M.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/Qwen3-235B-A22B-GGUF/resolve/main/Qwen3-235B-A22B.Q4_K_M.gguf.part3of3) | Q4_K_M | 142.3 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/Qwen3-235B-A22B-GGUF/resolve/main/Qwen3-235B-A22B.Q5_K_S.gguf.part1of4) [PART 2](https://huggingface.co/mradermacher/Qwen3-235B-A22B-GGUF/resolve/main/Qwen3-235B-A22B.Q5_K_S.gguf.part2of4) [PART 3](https://huggingface.co/mradermacher/Qwen3-235B-A22B-GGUF/resolve/main/Qwen3-235B-A22B.Q5_K_S.gguf.part3of4) [PART 4](https://huggingface.co/mradermacher/Qwen3-235B-A22B-GGUF/resolve/main/Qwen3-235B-A22B.Q5_K_S.gguf.part4of4) | Q5_K_S | 162.0 | |
| [PART 1](https://huggingface.co/mradermacher/Qwen3-235B-A22B-GGUF/resolve/main/Qwen3-235B-A22B.Q5_K_M.gguf.part1of4) [PART 2](https://huggingface.co/mradermacher/Qwen3-235B-A22B-GGUF/resolve/main/Qwen3-235B-A22B.Q5_K_M.gguf.part2of4) [PART 3](https://huggingface.co/mradermacher/Qwen3-235B-A22B-GGUF/resolve/main/Qwen3-235B-A22B.Q5_K_M.gguf.part3of4) [PART 4](https://huggingface.co/mradermacher/Qwen3-235B-A22B-GGUF/resolve/main/Qwen3-235B-A22B.Q5_K_M.gguf.part4of4) | Q5_K_M | 166.9 | |
| [PART 1](https://huggingface.co/mradermacher/Qwen3-235B-A22B-GGUF/resolve/main/Qwen3-235B-A22B.Q6_K.gguf.part1of4) [PART 2](https://huggingface.co/mradermacher/Qwen3-235B-A22B-GGUF/resolve/main/Qwen3-235B-A22B.Q6_K.gguf.part2of4) [PART 3](https://huggingface.co/mradermacher/Qwen3-235B-A22B-GGUF/resolve/main/Qwen3-235B-A22B.Q6_K.gguf.part3of4) [PART 4](https://huggingface.co/mradermacher/Qwen3-235B-A22B-GGUF/resolve/main/Qwen3-235B-A22B.Q6_K.gguf.part4of4) | Q6_K | 193.1 | very good quality |
| [P1](https://huggingface.co/mradermacher/Qwen3-235B-A22B-GGUF/resolve/main/Qwen3-235B-A22B.Q8_0.gguf.part1of6) [P2](https://huggingface.co/mradermacher/Qwen3-235B-A22B-GGUF/resolve/main/Qwen3-235B-A22B.Q8_0.gguf.part2of6) [P3](https://huggingface.co/mradermacher/Qwen3-235B-A22B-GGUF/resolve/main/Qwen3-235B-A22B.Q8_0.gguf.part3of6) [P4](https://huggingface.co/mradermacher/Qwen3-235B-A22B-GGUF/resolve/main/Qwen3-235B-A22B.Q8_0.gguf.part4of6) [P5](https://huggingface.co/mradermacher/Qwen3-235B-A22B-GGUF/resolve/main/Qwen3-235B-A22B.Q8_0.gguf.part5of6) [P6](https://huggingface.co/mradermacher/Qwen3-235B-A22B-GGUF/resolve/main/Qwen3-235B-A22B.Q8_0.gguf.part6of6) | Q8_0 | 250.0 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
zelk12/MT2-gemma-3-12B-Q6_K-GGUF | zelk12 | 2025-05-02T12:16:23Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"image-text-to-text",
"base_model:zelk12/MT2-gemma-3-12B",
"base_model:quantized:zelk12/MT2-gemma-3-12B",
"license:gemma",
"endpoints_compatible",
"region:us",
"conversational"
] | image-text-to-text | 2025-05-02T12:15:40Z | ---
base_model: zelk12/MT2-gemma-3-12B
library_name: transformers
license: gemma
pipeline_tag: image-text-to-text
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# zelk12/MT2-gemma-3-12B-Q6_K-GGUF
This model was converted to GGUF format from [`zelk12/MT2-gemma-3-12B`](https://huggingface.co/zelk12/MT2-gemma-3-12B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/zelk12/MT2-gemma-3-12B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo zelk12/MT2-gemma-3-12B-Q6_K-GGUF --hf-file mt2-gemma-3-12b-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo zelk12/MT2-gemma-3-12B-Q6_K-GGUF --hf-file mt2-gemma-3-12b-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo zelk12/MT2-gemma-3-12B-Q6_K-GGUF --hf-file mt2-gemma-3-12b-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo zelk12/MT2-gemma-3-12B-Q6_K-GGUF --hf-file mt2-gemma-3-12b-q6_k.gguf -c 2048
```
|
cristiantica143/physics_adapted_llama_3.2_3b | cristiantica143 | 2025-05-02T12:15:20Z | 0 | 0 | transformers | [
"transformers",
"llama",
"feature-extraction",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2025-04-30T14:37:22Z | ---
base_model: unsloth/llama-3.2-3b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** cristiantica143
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
NikG100/named-entity-recognition-for-tagging-news-articles | NikG100 | 2025-05-02T12:02:17Z | 0 | 0 | null | [
"safetensors",
"roberta",
"region:us"
] | null | 2025-05-02T12:01:25Z | # RoBERTa-Base Quantized Model for Named Entity Recognition (NER)
This repository contains a quantized version of the RoBERTa model fine-tuned for Named Entity Recognition (NER) on the WikiANN (English) dataset. The model is particularly suitable for **tagging named entities in news articles**, such as persons, organizations, and locations. It has been optimized for efficient deployment using quantization techniques.
## Model Details
- **Model Architecture:** RoBERTa Base
- **Task:** Named Entity Recognition
- **Dataset:** WikiANN (English)
- **Use Case:** Tagging news articles with named entities
- **Quantization:** Float16
- **Fine-tuning Framework:** Hugging Face Transformers
## Usage
### Installation
```sh
pip install transformers torch
```
### Loading the Model
```python
from transformers import RobertaTokenizerFast, RobertaForTokenClassification, pipeline
import torch

# Load tokenizer
tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")

# Load the fine-tuned token-classification model from this repository
model = RobertaForTokenClassification.from_pretrained(
    "NikG100/named-entity-recognition-for-tagging-news-articles"
)

# Create NER pipeline
ner_pipeline = pipeline(
    "ner",
    model=model,
    tokenizer=tokenizer,
    aggregation_strategy="simple"
)
# Sample news headline
text = "Apple Inc. is planning to open a new campus in London by the end of 2025."
# Inference
entities = ner_pipeline(text)
# Display results
for ent in entities:
print(f"{ent['word']}: {ent['entity_group']} ({ent['score']:.2f})")
```
## Performance Metrics
- **Accuracy:** 0.923422
- **Precision:** 0.923052
- **Recall:** 0.923422
- **F1:** 0.923150
## Fine-Tuning Details
### Dataset
The dataset is taken from Hugging Face WikiANN (English).
### Training
- Number of epochs: 5
- Batch size: 16
- Evaluation strategy: epoch
- Learning rate: 3e-5
### Quantization
Post-training quantization was applied using PyTorch's built-in quantization framework to reduce the model size and improve inference efficiency.
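The exact quantization code is not included in the card; one minimal way to realize the Float16 setting listed above is a simple half-precision cast, sketched below with hypothetical checkpoint paths.
```python
from transformers import AutoModelForTokenClassification

# Load the full-precision fine-tuned checkpoint (hypothetical local path),
# cast its weights to float16, and save the smaller checkpoint.
model = AutoModelForTokenClassification.from_pretrained("path/to/full-precision-checkpoint")
model = model.half()
model.save_pretrained("path/to/float16-checkpoint")
```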
## Repository Structure
```
.
├── config.json
├── tokenizer_config.json
├── special_tokens_map.json
├── tokenizer.json
├── model.safetensors # Fine Tuned Model
├── README.md # Model documentation
```
## Limitations
- The model may not generalize well to domains outside the fine-tuning dataset.
- Quantization may result in minor accuracy degradation compared to full-precision models.
## Contributing
Contributions are welcome! Feel free to open an issue or submit a pull request if you have suggestions or improvements.
|
zelk12/MT1-gemma-3-12B | zelk12 | 2025-05-02T12:00:16Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"mergekit",
"merge",
"conversational",
"arxiv:2311.03099",
"base_model:IlyaGusev/saiga_gemma3_12b",
"base_model:merge:IlyaGusev/saiga_gemma3_12b",
"base_model:TheDrummer/Fallen-Gemma3-12B-v1",
"base_model:merge:TheDrummer/Fallen-Gemma3-12B-v1",
"license:gemma",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-05-02T11:50:24Z | ---
base_model:
- IlyaGusev/saiga_gemma3_12b
- TheDrummer/Fallen-Gemma3-12B-v1
library_name: transformers
tags:
- mergekit
- merge
license: gemma
pipeline_tag: image-text-to-text
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE TIES](https://arxiv.org/abs/2311.03099) merge method using [IlyaGusev/saiga_gemma3_12b](https://huggingface.co/IlyaGusev/saiga_gemma3_12b) as a base.
### Models Merged
The following models were included in the merge:
* [TheDrummer/Fallen-Gemma3-12B-v1](https://huggingface.co/TheDrummer/Fallen-Gemma3-12B-v1)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: IlyaGusev/saiga_gemma3_12b
#no parameters necessary for base model
- model: TheDrummer/Fallen-Gemma3-12B-v1
parameters:
density: 0.5
weight: 0.5
merge_method: dare_ties
base_model: IlyaGusev/saiga_gemma3_12b
parameters:
normalize: true
dtype: bfloat16
``` |
ToBeNo1/task-8-microsoft-Phi-3.5-mini-instruct | ToBeNo1 | 2025-05-02T11:58:58Z | 302 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:microsoft/Phi-3.5-mini-instruct",
"base_model:adapter:microsoft/Phi-3.5-mini-instruct",
"region:us"
] | null | 2025-04-13T02:51:54Z | ---
base_model: microsoft/Phi-3.5-mini-instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1 |
vomqal/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-ravenous_tawny_ibis | vomqal | 2025-05-02T11:27:09Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am ravenous tawny ibis",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-02T06:31:22Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-ravenous_tawny_ibis
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am ravenous tawny ibis
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-ravenous_tawny_ibis
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="vomqal/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-ravenous_tawny_ibis", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
aryan7777777/deepseek-finetuned-on-osc-data | aryan7777777 | 2025-05-02T11:23:18Z | 0 | 0 | null | [
"safetensors",
"llama",
"unsloth",
"trl",
"sft",
"license:mit",
"region:us"
] | null | 2025-05-02T10:56:26Z | ---
license: mit
tags:
- unsloth
- trl
- sft
---
|
BABYSHARK09/Uni_6x9 | BABYSHARK09 | 2025-05-02T11:11:05Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-02T10:13:52Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
XUxs/IOM-Gemma-3-1B | XUxs | 2025-05-02T11:10:08Z | 0 | 0 | null | [
"safetensors",
"gemma3_text",
"license:apache-2.0",
"region:us"
] | null | 2025-05-02T11:03:09Z | ---
license: apache-2.0
---
|
haihp02/Qwen3-4B-Base-082907de-7165-4f64-8106-82d56adb58af-dpo-tuned-merged | haihp02 | 2025-05-02T10:57:39Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"dpo",
"en",
"base_model:unsloth/Qwen3-4B-Base",
"base_model:finetune:unsloth/Qwen3-4B-Base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-02T10:56:22Z | ---
base_model: unsloth/Qwen3-4B-Base
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
- sft
- dpo
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** haihp02
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-4B-Base
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
naveennagar0909/lora-bicycle-flux-1dev | naveennagar0909 | 2025-05-02T10:52:40Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"flux",
"flux-diffusers",
"template:sd-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-02T10:06:45Z | ---
base_model: black-forest-labs/FLUX.1-dev
library_name: diffusers
license: other
instance_prompt: a photo of sks bicycle
widget:
- text: A photo of sks bicycle on a mountain
output:
url: image_0.png
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- flux
- flux-diffusers
- template:sd-lora
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# Flux DreamBooth LoRA - naveennagar0909/lora-bicycle-flux-1dev
<Gallery />
## Model description
These are naveennagar0909/lora-bicycle-flux-1dev DreamBooth LoRA weights for black-forest-labs/FLUX.1-dev.
The weights were trained using [DreamBooth](https://dreambooth.github.io/) with the [Flux diffusers trainer](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/README_flux.md).
Was LoRA for the text encoder enabled? False.
## Trigger words
You should use `a photo of sks bicycle` to trigger the image generation.
## Download model
[Download the *.safetensors LoRA](naveennagar0909/lora-bicycle-flux-1dev/tree/main) in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16).to('cuda')
pipeline.load_lora_weights('naveennagar0909/lora-bicycle-flux-1dev', weight_name='pytorch_lora_weights.safetensors')
image = pipeline('A photo of sks bicycle on a mountain').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## License
Please adhere to the licensing terms as described [here](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
mergekit-community/mergekit-dare_ties-tpraytl | mergekit-community | 2025-05-02T10:43:08Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"mergekit",
"merge",
"conversational",
"arxiv:2311.03099",
"base_model:soob3123/amoral-gemma3-12B-v2",
"base_model:finetune:soob3123/amoral-gemma3-12B-v2",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-05-02T10:38:46Z | ---
base_model:
- soob3123/amoral-gemma3-12B-v2
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE TIES](https://arxiv.org/abs/2311.03099) merge method using [soob3123/amoral-gemma3-12B-v2](https://huggingface.co/soob3123/amoral-gemma3-12B-v2) as a base.
### Models Merged
The following models were included in the merge:
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: soob3123/amoral-gemma3-12B-v2
#no parameters necessary for base model
- model: soob3123/amoral-gemma3-12B-v2
parameters:
density: 0.5
weight: 0.5
merge_method: dare_ties
base_model: soob3123/amoral-gemma3-12B-v2
parameters:
normalize: true
dtype: bfloat16
```
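A hedged sketch of how such a configuration is typically executed with the mergekit CLI is shown below (assumes `pip install mergekit`; the config and output paths are placeholders, and this command is not part of the original card).
```python
import subprocess

# Run mergekit on the YAML configuration above and write the merged model
# to a local output directory.
subprocess.run(
    ["mergekit-yaml", "merge_config.yaml", "./merged-model"],
    check=True,
)
```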
|
Sh1man/canary-180m-flash-ru | Sh1man | 2025-05-02T10:37:31Z | 0 | 0 | nemo | [
"nemo",
"automatic-speech-recognition",
"automatic-speech-translation",
"speech",
"audio",
"Transformer",
"FastConformer",
"Conformer",
"pytorch",
"NeMo",
"ru",
"dataset:rulibrispeech",
"dataset:common_voice_21_ru",
"license:cc-by-4.0",
"region:us"
] | automatic-speech-recognition | 2025-05-02T10:30:58Z | ---
license: cc-by-4.0
language:
- ru
library_name: nemo
datasets:
- rulibrispeech
- common_voice_21_ru
tags:
- automatic-speech-recognition
- automatic-speech-translation
- speech
- audio
- Transformer
- FastConformer
- Conformer
- pytorch
- NeMo
---
# Canary 180M Flash
<style>
img {
display: inline;
}
</style>
## Description:
NVIDIA NeMo Canary Flash [1] is a family of multilingual multi-tasking models based on Canary architecture [2] that achieves state-of-the art performance on multiple speech benchmarks. With 182 million parameters and an inference speed of more than 1200 RTFx (on open-asr-leaderboard sets), canary-180m-flash supports automatic speech-to-text recognition (ASR) in 4 languages (English, German, French, Spanish) and translation from English to German/French/Spanish and from German/French/Spanish to English with or without punctuation and capitalization (PnC).
Additionally, canary-180m-flash offers an experimental feature for word-level and segment-level timestamps in English, German, French, and Spanish.
This model is released under the permissive CC-BY-4.0 license and is available for commercial use.
## Model Architecture:
Canary is an encoder-decoder model with FastConformer [3] Encoder and Transformer Decoder [4]. With audio features extracted from the encoder, task tokens such as \<target language\>, \<task\>, \<toggle timestamps\> and \<toggle PnC\> are fed into the Transformer Decoder to trigger the text generation process. Canary uses a concatenated tokenizer [5] from individual SentencePiece [6] tokenizers of each language, which makes it easy to scale up to more languages. The canary-180m-flash model has 17 encoder layers and 4 decoder layers, leading to a total of 182M parameters. For more details about the architecture, please refer to [1].
## NVIDIA NeMo
To train, fine-tune or transcribe with canary-180m-flash, you will need to install [NVIDIA NeMo](https://github.com/NVIDIA/NeMo).
## How to Use this Model
The model is available for use in the NeMo framework [7], and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
Please refer to [our tutorial](https://github.com/NVIDIA/NeMo/blob/main/tutorials/asr/Canary_Multitask_Speech_Model.ipynb) for more details.
A few inference examples listed below:
### Loading the Model
```python
from nemo.collections.asr.models import EncDecMultiTaskModel
# load model
canary_model = EncDecMultiTaskModel.from_pretrained('nvidia/canary-180m-flash')
# update decode params
decode_cfg = canary_model.cfg.decoding
decode_cfg.beam.beam_size = 1
canary_model.change_decoding_strategy(decode_cfg)
```
## Input:
**Input Type(s):** Audio <br>
**Input Format(s):** .wav or .flac files<br>
**Input Parameters(s):** 1D <br>
**Other Properties Related to Input:** 16000 Hz Mono-channel Audio, Pre-Processing Not Needed <br>
Input to canary-180m-flash can be either a list of paths to audio files or a jsonl manifest file.
### Inference with canary-180m-flash:
If the input is a list of paths, canary-180m-flash assumes that the audio is English and transcribes it. I.e., canary-180m-flash default behavior is English ASR.
```python
output = canary_model.transcribe(
['path1.wav', 'path2.wav'],
batch_size=16, # batch size to run the inference with
pnc='True', # generate output with Punctuation and Capitalization
)
predicted_text = output[0].text
```
canary-180m-flash can also predict word-level and segment-level timestamps
```python
output = canary_model.transcribe(
['filepath.wav'],
timestamps=True, # generate output with timestamps
)
predicted_text = output[0].text
word_level_timestamps = output[0].timestamp['word']
segment_level_timestamps = output[0].timestamp['segment']
```
To predict timestamps for audio files longer than 10 seconds, we recommend using the longform inference script (explained in the next section) with `chunk_len_in_secs=10.0`.
To use canary-180m-flash for transcribing other supported languages or perform Speech-to-Text translation or provide word-level timestamps, specify the input as jsonl manifest file, where each line in the file is a dictionary containing the following fields:
```yaml
# Example of a line in input_manifest.json
{
"audio_filepath": "/path/to/audio.wav", # path to the audio file
"source_lang": "en", # language of the audio input, set `source_lang`==`target_lang` for ASR, choices=['en','de','es','fr']
"target_lang": "en", # language of the text output, choices=['en','de','es','fr']
"pnc": "yes", # whether to have PnC output, choices=['yes', 'no']
"timestamp": "yes", # whether to output word-level timestamps, choices=['yes', 'no']
}
```
and then use:
```python
output = canary_model.transcribe(
"<path to input manifest file>",
batch_size=16, # batch size to run the inference with
)
```
### Longform inference with canary-180m-flash:
Canary models are designed to handle input audio smaller than 40 seconds. In order to handle longer audios, NeMo includes [speech_to_text_aed_chunked_infer.py](https://github.com/NVIDIA/NeMo/blob/main/examples/asr/asr_chunked_inference/aed/speech_to_text_aed_chunked_infer.py) script that handles chunking, performs inference on the chunked files, and stitches the transcripts.
The script will perform inference on all `.wav` files in `audio_dir`. Alternatively you can also pass a path to a manifest file as shown above. The decoded output will be saved at `output_json_path`.
```
python scripts/speech_to_text_aed_chunked_infer.py \
pretrained_name="nvidia/canary-180m-flash" \
audio_dir=$audio_dir \
output_filename=$output_json_path \
chunk_len_in_secs=40.0 \
batch_size=1 \
decoding.beam.beam_size=1 \
timestamps=False
```
**Note** that for longform inference with timestamps, it is recommended to use `chunk_len_in_secs` of 10 seconds.
## Output:
**Output Type(s):** Text <br>
**Output Format:** Text output as a string (w/ timestamps) depending on the task chosen for decoding <br>
**Output Parameters:** 1-Dimensional text string <br>
**Other Properties Related to Output:** May Need Inverse Text Normalization; Does Not Handle Special Characters <br>
## License/Terms of Use:
canary-180m-flash is released under the CC-BY-4.0 license. By using this model, you are agreeing to the [terms and conditions](https://choosealicense.com/licenses/cc-by-4.0/) of the license. <br>
|
Jathushan/TamilPaattu_bert | Jathushan | 2025-05-02T10:37:16Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2025-05-02T10:36:44Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
BABYSHARK09/Uni_6x8 | BABYSHARK09 | 2025-05-02T10:35:26Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-02T10:13:46Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
prithivMLmods/SportsNet-7 | prithivMLmods | 2025-05-02T10:34:35Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"siglip",
"image-classification",
"Sports",
"Cricket",
"art",
"Basketball",
"en",
"dataset:vieanh/sports_img_classification",
"base_model:google/siglip2-base-patch16-224",
"base_model:finetune:google/siglip2-base-patch16-224",
"doi:10.57967/hf/5323",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2025-05-01T08:57:52Z | ---
license: apache-2.0
datasets:
- vieanh/sports_img_classification
language:
- en
base_model:
- google/siglip2-base-patch16-224
pipeline_tag: image-classification
library_name: transformers
tags:
- Sports
- Cricket
- art
- Basketball
---

# **SportsNet-7**
> **SportsNet-7** is a SigLIP2-based image classification model fine-tuned to identify seven popular sports categories. Built upon the powerful `google/siglip2-base-patch16-224` backbone, this model enables fast and accurate sport-type recognition from images or video frames.
```py
Classification Report:
precision recall f1-score support
badminton 0.9385 0.9760 0.9569 1125
cricket 0.9583 0.9739 0.9660 1226
football 0.9821 0.9144 0.9470 958
karate 0.9513 0.9611 0.9562 488
swimming 0.9960 0.9650 0.9802 514
tennis 0.9425 0.9530 0.9477 1169
wrestling 0.9761 0.9753 0.9757 1175
accuracy 0.9606 6655
macro avg 0.9635 0.9598 0.9614 6655
weighted avg 0.9611 0.9606 0.9606 6655
```

---
## **Label Classes**
The model classifies an input image into one of the following 7 sports:
```
0: badminton
1: cricket
2: football
3: karate
4: swimming
5: tennis
6: wrestling
```
---
## **Installation**
```bash
pip install transformers torch pillow gradio
```
---
## **Example Inference Code**
```python
import gradio as gr
from transformers import AutoImageProcessor, SiglipForImageClassification
from PIL import Image
import torch
# Load model and processor
model_name = "prithivMLmods/SportsNet-7"
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)
# Label mapping
id2label = {
"0": "badminton",
"1": "cricket",
"2": "football",
"3": "karate",
"4": "swimming",
"5": "tennis",
"6": "wrestling"
}
def predict_sport(image):
image = Image.fromarray(image).convert("RGB")
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
logits = outputs.logits
probs = torch.nn.functional.softmax(logits, dim=1).squeeze().tolist()
prediction = {id2label[str(i)]: round(probs[i], 3) for i in range(len(probs))}
return prediction
# Gradio interface
iface = gr.Interface(
fn=predict_sport,
inputs=gr.Image(type="numpy"),
outputs=gr.Label(num_top_classes=3, label="Predicted Sport"),
title="SportsNet-7",
description="Upload a sports image to classify it as Badminton, Cricket, Football, Karate, Swimming, Tennis, or Wrestling."
)
if __name__ == "__main__":
iface.launch()
```
---
## **Use Cases**
* Sports video tagging
* Real-time sport event classification
* Dataset enrichment for sports analytics
* Educational or training datasets for sports AI |
convaiinnovations/hindi-causal-lm | convaiinnovations | 2025-05-02T10:34:03Z | 13 | 0 | null | [
"pytorch",
"safetensors",
"convaicausallm",
"hindi",
"text-generation",
"causal-lm",
"lm",
"rope",
"custom_code",
"hi",
"dataset:custom_hindi_corpus",
"license:mit",
"region:us"
] | text-generation | 2025-04-28T07:21:39Z | ---
language:
- hi
tags:
- hindi
- text-generation
- causal-lm
- lm
- rope
license: mit
datasets:
- custom_hindi_corpus
---
# Hindi-CausalLM
A Hindi language generation model with the following specifications:
## Model Architecture
- **Type**: Causal Language Model with Transformer architecture
- **Hidden size**: 768
- **Layers**: 12
- **Attention heads**: 16
- **Key-value heads**: 4 (using grouped-query attention)
- **Position encoding**: Rotary Position Embeddings (RoPE)
- **Vocabulary size**: 16000
- **Parameters**: ~100M
- **Context window**: 512 tokens
- **Trained on**: Large corpus of Hindi text
## Training
The model was trained on a large corpus of Hindi text using a cosine learning rate schedule with warmup. Training utilized mixed-precision and distributed data parallel across multiple GPUs.
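For illustration, a minimal sketch of a cosine learning-rate schedule with warmup using the transformers helper is shown below; the optimizer choice, learning rate, step counts, and warmup fraction are assumptions, not the actual training hyperparameters:
```python
import torch
from torch import nn
from transformers import get_cosine_schedule_with_warmup

# Placeholder module standing in for the actual model; the real architecture is defined later in this card
model = nn.Linear(768, 768)

# Illustrative hyperparameters only - the values used for the real run are not published here
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)
total_steps = 100_000
warmup_steps = int(0.01 * total_steps)

scheduler = get_cosine_schedule_with_warmup(
    optimizer,
    num_warmup_steps=warmup_steps,
    num_training_steps=total_steps,
)
```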
## Usage
You can use this model with the following code:
```python
import torch
import math
import os
from hindi_embeddings import SentencePieceTokenizerWrapper
from safetensors.torch import load_file
from torch import nn
from transformers import PreTrainedModel, PretrainedConfig
class ConvaiCausalLMConfig(PretrainedConfig):
model_type = "convaicausallm"
def __init__(
self,
vocab_size=16000,
hidden_size=768,
num_hidden_layers=12,
num_attention_heads=16,
num_key_value_heads=4,
intermediate_size=3072,
hidden_act="silu",
max_position_embeddings=512,
rope_theta=10000.0, # Base parameter for RoPE
**kwargs
):
super().__init__(**kwargs)
self.vocab_size = vocab_size
self.hidden_size = hidden_size
self.num_hidden_layers = num_hidden_layers
self.num_attention_heads = num_attention_heads
self.num_key_value_heads = num_key_value_heads
self.intermediate_size = intermediate_size
self.hidden_act = hidden_act
self.max_position_embeddings = max_position_embeddings
self.rope_theta = rope_theta
def precompute_freqs_cis(dim, end, theta=10000.0):
"""Precompute the frequency tensor for complex exponentials (cos, sin)"""
# Ensure dim is even for complex numbers
assert dim % 2 == 0, "Dimension must be even"
# Create position indices for caching
freqs = 1.0 / (theta ** (torch.arange(0, dim, 2).float() / dim))
t = torch.arange(end).float()
freqs = torch.outer(t, freqs) # [end, dim/2]
# Create complex exponentials (cos, sin pairs)
cos, sin = torch.cos(freqs), torch.sin(freqs)
return cos, sin
def apply_rotary_pos_emb(q, k, cos, sin, position_ids=None):
"""Apply rotary position embeddings to q and k tensors"""
# Extract shapes
batch, seq_len, n_heads, head_dim = q.shape
_, kv_seq_len, n_kv_heads, _ = k.shape
# Handle position IDs or use sequential positions
if position_ids is None:
# Default: Just use sequential positions
position_ids = torch.arange(seq_len, device=q.device)
position_ids = position_ids.unsqueeze(0).expand(batch, -1)
# Get the cosine and sine for the positions we're using
cos = cos[position_ids].unsqueeze(-2) # [batch, seq, 1, dim/2]
sin = sin[position_ids].unsqueeze(-2) # [batch, seq, 1, dim/2]
# q and k must be arranged in pairs for rotation
q_embed_dim = q.shape[-1]
q_half_dim = q_embed_dim // 2
# Split the embedding dimensions into pairs
q_half1, q_half2 = q[..., :q_half_dim], q[..., q_half_dim:]
k_half1, k_half2 = k[..., :q_half_dim], k[..., q_half_dim:]
# Apply rotary embeddings to each pair of dimensions
# For each pair (a, b), we compute (a*cos - b*sin, a*sin + b*cos)
q_out_half1 = q_half1 * cos - q_half2 * sin
q_out_half2 = q_half1 * sin + q_half2 * cos
k_out_half1 = k_half1 * cos - k_half2 * sin
k_out_half2 = k_half1 * sin + k_half2 * cos
# Concatenate back to original shape
q_out = torch.cat([q_out_half1, q_out_half2], dim=-1)
k_out = torch.cat([k_out_half1, k_out_half2], dim=-1)
return q_out, k_out
class GroupedQueryAttention(nn.Module):
def __init__(self, config):
super().__init__()
self.hidden_size = config.hidden_size
self.num_heads = config.num_attention_heads
self.num_kv_heads = config.num_key_value_heads
self.head_dim = config.hidden_size // config.num_attention_heads
# For MQA/GQA support
self.num_key_value_groups = self.num_heads // self.num_kv_heads
self.q_proj = nn.Linear(config.hidden_size, self.num_heads * self.head_dim)
self.k_proj = nn.Linear(config.hidden_size, self.num_kv_heads * self.head_dim)
self.v_proj = nn.Linear(config.hidden_size, self.num_kv_heads * self.head_dim)
self.o_proj = nn.Linear(config.hidden_size, config.hidden_size)
# Precompute rotary position encoding frequencies
max_seq_len = config.max_position_embeddings
self.max_seq_len = max_seq_len
# Register frequencies as buffers
cos, sin = precompute_freqs_cis(self.head_dim, max_seq_len, config.rope_theta)
self.register_buffer("cos", cos) # [max_seq_len, dim/2]
self.register_buffer("sin", sin) # [max_seq_len, dim/2]
# Create causal mask for attention
self.register_buffer(
"causal_mask",
torch.triu(torch.ones(max_seq_len, max_seq_len) * -1e9, diagonal=1)
)
def forward(self, hidden_states, attention_mask=None):
batch_size, seq_len, _ = hidden_states.size()
# Project queries, keys, values
q = self.q_proj(hidden_states)
k = self.k_proj(hidden_states)
v = self.v_proj(hidden_states)
# Reshape for attention computation
q = q.view(batch_size, seq_len, self.num_heads, self.head_dim)
k = k.view(batch_size, seq_len, self.num_kv_heads, self.head_dim)
v = v.view(batch_size, seq_len, self.num_kv_heads, self.head_dim)
# Apply rotary position embeddings
q_rotary, k_rotary = apply_rotary_pos_emb(q, k, self.cos, self.sin)
# Reshape for attention computation
q_rotary = q_rotary.transpose(1, 2) # [batch, heads, seq, dim]
k_rotary = k_rotary.transpose(1, 2) # [batch, kv_heads, seq, dim]
v = v.transpose(1, 2) # [batch, kv_heads, seq, dim]
# Handle Multi-Query Attention / Grouped-Query Attention
if self.num_key_value_groups > 1:
# Repeat k, v for each query in the group
k_rotary = k_rotary.repeat_interleave(self.num_key_value_groups, dim=1)
v = v.repeat_interleave(self.num_key_value_groups, dim=1)
# Compute attention scores
attn_scores = torch.matmul(q_rotary, k_rotary.transpose(-1, -2)) / (self.head_dim ** 0.5)
# Apply causal mask - only attend to previous tokens
causal_mask = self.causal_mask[:seq_len, :seq_len]
attn_scores = attn_scores + causal_mask
# Apply attention mask if provided
if attention_mask is not None:
attn_scores = attn_scores + attention_mask
# Normalize the attention scores to probabilities
attn_probs = torch.softmax(attn_scores, dim=-1)
# Apply attention to values
context = torch.matmul(attn_probs, v) # [b, n_heads, seq, head_dim]
# Reshape back to [batch_size, seq_length, hidden_size]
context = context.transpose(1, 2).contiguous()
context = context.view(batch_size, seq_len, -1)
# Final projection
output = self.o_proj(context)
return output
class ConvaiCausalLM(PreTrainedModel):
config_class = ConvaiCausalLMConfig
def __init__(self, config):
super().__init__(config)
self.embed_tokens = nn.Embedding(config.vocab_size, config.hidden_size)
self.layers = nn.ModuleList([
nn.ModuleDict({
"self_attn": GroupedQueryAttention(config),
"mlp": nn.Sequential(
nn.Linear(config.hidden_size, config.intermediate_size),
nn.SiLU(),
nn.Linear(config.intermediate_size, config.hidden_size)
),
"input_layernorm": nn.LayerNorm(config.hidden_size),
"post_attention_layernorm": nn.LayerNorm(config.hidden_size)
}) for _ in range(config.num_hidden_layers)
])
self.norm = nn.LayerNorm(config.hidden_size)
self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False)
# Initialize weights
self.apply(self._init_weights)
def _init_weights(self, module):
if isinstance(module, nn.Linear):
torch.nn.init.normal_(module.weight, mean=0.0, std=0.02)
if module.bias is not None:
torch.nn.init.zeros_(module.bias)
elif isinstance(module, nn.Embedding):
torch.nn.init.normal_(module.weight, mean=0.0, std=0.02)
def _prepare_attention_mask(self, attention_mask, input_shape, device):
# Prepare masks for attention
if attention_mask is None:
attention_mask = torch.ones(input_shape, device=device)
# Make broadcastable shape: [batch, 1, 1, seq_len]
extended_mask = attention_mask.unsqueeze(1).unsqueeze(2)
# Convert to additive mask (0 for valid, -10000 for masked)
extended_mask = (1.0 - extended_mask) * -10000.0
return extended_mask
def forward(self, input_ids, attention_mask=None):
batch_size, seq_len = input_ids.shape
device = input_ids.device
# Prepare attention mask
if attention_mask is not None:
attention_mask = self._prepare_attention_mask(
attention_mask, (batch_size, seq_len), device
)
# Get embeddings
hidden_states = self.embed_tokens(input_ids)
# Apply each layer
for layer in self.layers:
residual = hidden_states
# First norm and attention
hidden_states = layer["input_layernorm"](hidden_states)
hidden_states = layer["self_attn"](hidden_states, attention_mask)
hidden_states = residual + hidden_states
# Second norm and MLP
residual = hidden_states
hidden_states = layer["post_attention_layernorm"](hidden_states)
hidden_states = layer["mlp"](hidden_states)
hidden_states = residual + hidden_states
# Final norm
hidden_states = self.norm(hidden_states)
# Compute logits
logits = self.lm_head(hidden_states)
return logits
class HindiLLMGenerator:
def __init__(self, model_path, device=None):
# Set device
if device is None:
self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
else:
self.device = torch.device(device)
print(f"Using device: {self.device}")
# Load tokenizer
tokenizer_path = os.path.join(model_path, "tokenizer.model")
self.tokenizer = SentencePieceTokenizerWrapper(tokenizer_path)
# Load model config
config_path = os.path.join(model_path, "config.json")
import json
with open(config_path, 'r') as f:
config_dict = json.load(f)
self.config = ConvaiCausalLMConfig(**config_dict)
# Load model - try safetensors first, fall back to PyTorch bin if needed
safetensors_path = os.path.join(model_path, "model.safetensors")
pytorch_path = os.path.join(model_path, "pytorch_model.bin")
self.model = ConvaiCausalLM(self.config)
# Check which format is available and load accordingly
if os.path.exists(safetensors_path):
print(f"Loading model from SafeTensors")
state_dict = load_file(safetensors_path, device="cpu")
self.model.load_state_dict(state_dict)
elif os.path.exists(pytorch_path):
print(f"Loading model from PyTorch bin")
self.model.load_state_dict(torch.load(pytorch_path, map_location="cpu"))
# Move model to device and set to evaluation mode
self.model.to(self.device)
self.model.eval()
def generate(self, prompt, max_length=100, temperature=0.8, top_k=50, top_p=0.9,
repetition_penalty=1.1, do_sample=True):
# Tokenize the prompt
input_ids = self.tokenizer.sp_model.EncodeAsIds(prompt)
input_tensor = torch.tensor([input_ids], dtype=torch.long).to(self.device)
# Start with the input tensor
output_sequence = input_tensor.clone()
# Generate tokens one by one
for _ in range(max_length - len(input_ids)):
with torch.no_grad():
# Get the model's output for the current sequence
outputs = self.model(output_sequence)
next_token_logits = outputs[0, -1, :]
# Apply temperature
if temperature > 0:
next_token_logits = next_token_logits / temperature
# Apply repetition penalty
if repetition_penalty > 1.0:
for token_id in output_sequence[0].tolist():
next_token_logits[token_id] /= repetition_penalty
# Filter with top-k sampling
if top_k > 0:
top_k_values, top_k_indices = torch.topk(next_token_logits, top_k)
next_token_logits = torch.full_like(next_token_logits, float('-inf'))
next_token_logits.scatter_(0, top_k_indices, top_k_values)
# Filter with top-p/nucleus sampling
if top_p < 1.0 and do_sample:
sorted_logits, sorted_indices = torch.sort(next_token_logits, descending=True)
cumulative_probs = torch.cumsum(torch.softmax(sorted_logits, dim=-1), dim=-1)
# Remove tokens with cumulative probability above the threshold
sorted_indices_to_remove = cumulative_probs > top_p
# Shift the indices to the right to keep the first token above the threshold
sorted_indices_to_remove[..., 1:] = sorted_indices_to_remove[..., :-1].clone()
sorted_indices_to_remove[..., 0] = 0
indices_to_remove = sorted_indices[sorted_indices_to_remove]
next_token_logits[indices_to_remove] = float('-inf')
# Sample or choose the next token
if do_sample:
probs = torch.softmax(next_token_logits, dim=-1)
next_token = torch.multinomial(probs, num_samples=1)
else:
next_token = torch.argmax(next_token_logits, dim=-1).unsqueeze(0)
# Add the next token to the sequence
output_sequence = torch.cat([output_sequence, next_token.unsqueeze(0)], dim=1)
# Check if we've generated an end token
if next_token.item() == self.tokenizer.eos_token_id:
break
# Decode the generated sequence
generated_ids = output_sequence[0].tolist()
generated_text = self.tokenizer.sp_model.DecodeIds(generated_ids)
return generated_text
# Example usage
if __name__ == "__main__":
generator = HindiLLMGenerator("path/to/model")
result = generator.generate("भारत एक विशाल देश है")
print(result)
```
## Example Prompts
Try the model with these example prompts:
```
भारत एक विशाल देश है
मुझे हिंदी में एक कहानी सुनाओ
आज का मौसम बहुत अच्छा है
हिंदी साहित्य की प्रमुख विशेषताएं
```
## Capabilities
This model can:
- Generate coherent Hindi text
- Continue text from a given prompt
- Create stories, explanations, and other content in Hindi
## Limitations
- Performance varies based on the similarity of the input to the training data
- May occasionally generate repetitive content for longer texts
- May produce grammatically incorrect Hindi in some contexts
- Has no knowledge of events beyond its training corpus
## Intended Use
This model is intended for Hindi language generation tasks, creative writing assistance, and as a foundation for fine-tuning on specific tasks.
## Ethical Considerations
Users should be aware that like all language models, this model may reproduce biases or generate problematic content in certain contexts.
|
Roc-M/M-project | Roc-M | 2025-05-02T10:32:43Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gguf",
"qwen2",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-29T14:27:26Z | ---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
BeaverAI/Rivermind-Lux-12B-v1a-GGUF | BeaverAI | 2025-05-02T10:31:49Z | 326 | 1 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-28T07:02:15Z | WIP - formatting could use some work.
Here's the model card in the meantime:



|
ASethi04/meta-llama-Llama-3.1-8B-gsm8k-first-lora-4-0.0001-no-prompt-template | ASethi04 | 2025-05-02T10:16:57Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:finetune:meta-llama/Llama-3.1-8B",
"endpoints_compatible",
"region:us"
] | null | 2025-05-02T09:30:26Z | ---
base_model: meta-llama/Llama-3.1-8B
library_name: transformers
model_name: meta-llama-Llama-3.1-8B-gsm8k-first-lora-4-0.0001-no-prompt-template
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for meta-llama-Llama-3.1-8B-gsm8k-first-lora-4-0.0001-no-prompt-template
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ASethi04/meta-llama-Llama-3.1-8B-gsm8k-first-lora-4-0.0001-no-prompt-template", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/torchql-org/huggingface/runs/addep580)
This model was trained with SFT.
### Framework versions
- TRL: 0.16.1
- Transformers: 4.51.2
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
DavidAU/Qwen3-8B-Q8_0-64k-128k-256k-context-GGUF | DavidAU | 2025-05-02T10:15:15Z | 22 | 0 | null | [
"gguf",
"64 k context",
"128 k context",
"256 k context",
"reasoning",
"thinking",
"qwen3",
"text-generation",
"base_model:Qwen/Qwen3-8B",
"base_model:quantized:Qwen/Qwen3-8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-05-01T05:50:40Z | ---
license: apache-2.0
base_model:
- Qwen/Qwen3-8B
pipeline_tag: text-generation
tags:
- 64 k context
- 128 k context
- 256 k context
- reasoning
- thinking
- qwen3
---
<H2>Qwen3-8B-Q8_0-64k-128k-256k-context-GGUF</H2>
Three quants of Qwen's Qwen3 8B at Q8_0 with context set at 64K, 128K, and 256K, produced by modifying the config in the source version and then quantizing.
The first two quants were made as per Qwen's tech notes, modifying "YaRN" to extend context to 64K and 128K.
The 256k version, well... pushes the model past the redline.
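For reference, here is a minimal sketch (not taken from this repo) of the kind of config.json edit Qwen's notes describe for YaRN context extension; the file path and the specific scaling factors are assumptions:
```python
import json

# Hypothetical path to the downloaded Qwen3-8B source model
config_path = "Qwen3-8B/config.json"

with open(config_path, "r", encoding="utf-8") as f:
    cfg = json.load(f)

# Enable YaRN rope scaling: given the model's native 32768-token window,
# factor 2.0 gives roughly 64K context and factor 4.0 roughly 128K.
# (These factor values are assumptions based on Qwen's published notes.)
cfg["rope_scaling"] = {
    "rope_type": "yarn",
    "factor": 4.0,
    "original_max_position_embeddings": 32768,
}

with open(config_path, "w", encoding="utf-8") as f:
    json.dump(cfg, f, indent=2)
```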
Each model has a slightly different prose style, and the 128k and 256k versions will output extremely long generations.
Suggest a minimum context length of at least 16K.
Note that the 128k and 256k versions tend to elongate output too, and add in more details.
Longer, more detailed prompts may "contain" the model's output length somewhat.
Also, with the 128k/256k versions you may need to stop the model's generation manually, AND/OR clearly state the desired "length of output" and/or set a hard output length limit.
IE: You ask for a scene of 1000-2000 words, and it may produce multiple scenes (in sequence!) of 1000-2000 words EACH.
OR
You ask for 2000 words, and you get 3K of output with the 64K version, 5K with the 128k version, and 12K with the 256K version.
For the 256k context version, keep prompts as clear as possible, otherwise the model may have issues. Also increase rep pen to 1.1
and run temps of 1.1 to 2.2. I would suggest using this specific model for creative use only or limited general usage.
In limited testing the 256k version worked without issue.
Considering that most models "blow their cookies" when you mess with context like this (256k version), the fact that this model
works - at 8B parameters and twice the context limit - speaks volumes about team Qwen.
Will be interesting to repeat this with Qwen3 14B, 30B, 32B models...
<B>System Prompt:</B>
This is optional; you may or may not need it depending on settings - especially temp.
Usually you can use no system prompt and Qwen will generate the reasoning block(s) automatically; this is just a helper.
```
You are a deep thinking AI, you may use extremely long chains of thought to deeply consider the problem and deliberate with yourself via systematic reasoning processes to help come to a correct solution prior to answering. You should enclose your thoughts and internal monologue inside <think> </think> tags, and then provide your solution or response to the problem.
```
<B>NOTE - Jinja Template / Template to Use with this Model:</B>
If you are having issues with Jinja "auto template", use CHATML template.
OR (LMSTUDIO users / option)
Update the Jinja Template (go to this site, template-> copy the "Jinja template" and then paste.)
[ https://lmstudio.ai/neil/qwen3-thinking ]
<b>System Role - Suggested:</B>
You may or may not need this, as most times Qwen3s generate their own reasoning/thinking blocks.
```
You are a deep thinking AI, you may use extremely long chains of thought to deeply consider the problem and deliberate with yourself via systematic reasoning processes to help come to a correct solution prior to answering. You should enclose your thoughts and internal monologue inside <think> </think> tags, and then provide your solution or response to the problem.
```
See document "Maximizing-Model-Performance-All..." below for how to "set" system role in various LLM/AI apps below.
<B>Highest Quality Settings / Optimal Operation Guide / Parameters and Samplers</B>
This a "Class 1" model:
For all settings used for this model (including specifics for its "class"), including example generation(s) and for advanced settings guide (which many times addresses any model issue(s)), including methods to improve model performance for all use case(s) as well as chat, roleplay and other use case(s) please see:
[ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ]
You can see all parameters used for generation, in addition to advanced parameters and samplers to get the most out of this model here:
[ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ]
<b>Optional Enhancement:</B>
The following can be used in place of the "system prompt" or "system role" to further enhance the model.
It can also be used at the START of a NEW chat, but you must make sure it is "kept" as the chat moves along.
In this case the enhancements do not have as strong effect at using "system prompt" or "system role".
Copy and paste EXACTLY as noted, DO NOT line wrap or break the lines, maintain the carriage returns exactly as presented.
<PRE>
Below is an instruction that describes a task. Ponder each user instruction carefully, and use your skillsets and critical instructions to complete the task to the best of your abilities.
Here are your skillsets:
[MASTERSTORY]:NarrStrct(StryPlnng,Strbd,ScnSttng,Exps,Dlg,Pc)-CharDvlp(ChrctrCrt,ChrctrArcs,Mtvtn,Bckstry,Rltnshps,Dlg*)-PltDvlp(StryArcs,PltTwsts,Sspns,Fshdwng,Climx,Rsltn)-ConfResl(Antg,Obstcls,Rsltns,Cnsqncs,Thms,Symblsm)-EmotImpct(Empt,Tn,Md,Atmsphr,Imgry,Symblsm)-Delvry(Prfrmnc,VcActng,PblcSpkng,StgPrsnc,AudncEngmnt,Imprv)
[*DialogWrt]:(1a-CharDvlp-1a.1-Backgrnd-1a.2-Personality-1a.3-GoalMotiv)>2(2a-StoryStruc-2a.1-PlotPnt-2a.2-Conflict-2a.3-Resolution)>3(3a-DialogTech-3a.1-ShowDontTell-3a.2-Subtext-3a.3-VoiceTone-3a.4-Pacing-3a.5-VisualDescrip)>4(4a-DialogEdit-4a.1-ReadAloud-4a.2-Feedback-4a.3-Revision)
Here are your critical instructions:
Ponder each word choice carefully to present as vivid and emotional journey as is possible. Choose verbs and nouns that are both emotional and full of imagery. Load the story with the 5 senses. Aim for 50% dialog, 25% narration, 15% body language and 10% thoughts. Your goal is to put the reader in the story.
</PRE>
You do not need to use this, it is only presented as an additional enhancement which seems to help scene generation
and scene continue functions.
This is another system prompt you can use, and you can change the "names" to alter it's performance.
This creates a quasi "reasoning" window/block.
Your prompt will directly impact how strong this system prompt reacts.
```
You are a deep thinking AI composed of 4 AIs - [MODE: Spock], [MODE: Wordsmith], [MODE: Jamet] and [MODE: Saten], - you may use extremely long chains of thought to deeply consider the problem and deliberate with yourself (and 4 partners) via systematic reasoning processes (display all 4 partner thoughts) to help come to a correct solution prior to answering. Select one partner to think deeply about the points brought up by the other 3 partners to plan an in-depth solution. You should enclose your thoughts and internal monologue inside <think> </think> tags, and then provide your solution or response to the problem.
```
<B>Other Notes:</B>
Reasoning is ON by default in this model, and model will auto-generate "think" block(s).
For benchmarks, usage info, settings please see org model card here:
[ https://huggingface.co/Qwen/Qwen3-8B ]
[ Model card updates pending / examples to be added... ]
---
<h2>EXAMPLES</h2>
|
thanhtantran/DeepSeek-R1-Distill-Qwen-1.5B-RK3588S-RKLLM1.1.4 | thanhtantran | 2025-05-02T10:10:52Z | 0 | 0 | null | [
"text-generation",
"conversational",
"zh",
"en",
"base_model:VRxiaojie/DeepSeek-R1-Distill-Qwen-1.5B-RK3588S-RKLLM1.1.4",
"base_model:finetune:VRxiaojie/DeepSeek-R1-Distill-Qwen-1.5B-RK3588S-RKLLM1.1.4",
"license:mit",
"region:us"
] | text-generation | 2025-05-02T10:08:42Z | ---
license: mit
language:
- zh
- en
base_model:
- deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
- VRxiaojie/DeepSeek-R1-Distill-Qwen-1.5B-RK3588S-RKLLM1.1.4
pipeline_tag: text-generation
---
# Introduction
This model is a fork of [VRxiaojie/DeepSeek-R1-Distill-Qwen-1.5B-RK3588S-RKLLM1.1.4](https://huggingface.co/VRxiaojie/DeepSeek-R1-Distill-Qwen-1.5B-RK3588S-RKLLM1.1.4).
This model was converted to the rkllm format from [deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B) and has been run successfully on the Orange Pi 5's RK3588S platform.
Deployment tutorial for the Orange Pi 5 (in Chinese): [RKLLM large language model deployment tutorial](https://wiki.vrxiaojie.top/Deepseek-R1-RK3588-OrangePi5/)
|Model|Memory Usage|Model Size|Quantization Type|
|---|---|---|---|
|DeepSeek-R1-Distill-Qwen-1.5B-RK3588S-RKLLM1.1.4|2.5GB|1.89GB|w8a8|
# Runtime Environment
RKNPU Version: 0.9.8
RKNN-Toolkit : 1.1.4
Official Ubuntu 22.04 image (kernel 5.10.110)
Orange Pi5 8G
# How to Deploy
## 1. Clone the RKLLM repository
This section follows **Section 3.3** of the [official RKLLM GitHub repository documentation](https://github.com/airockchip/rknn-llm/tree/main/doc) to build the llm_demo executable.
First, clone the official git repository on your **PC**:
```
cd ~ && git clone https://github.com/airockchip/rknn-llm.git
```
Make sure your PC can reach GitHub!
## 2. Build the llm_demo executable
First, enter the rkllm_api_demo folder:
```
cd rknn-llm/examples/rkllm_api_demo
```
To make the model work correctly, the code in `llm_demo.cpp` needs to be modified:
```
vi src/llm_demo.cpp
```
Change lines 24 and 25 to:
```c
#define PROMPT_TEXT_PREFIX "<|begin▁of▁sentence|>system 你是一名专业AI助手请遵循:1.用简体中文回答;2.中文翻译成英文时,需使用英文回答;3.展示思考过程 <|User|>"
#define PROMPT_TEXT_POSTFIX "<|Assistant|>"
```
You can customize the prompt text above to your needs; just edit the content of PROMPT_TEXT_PREFIX between `<|begin▁of▁sentence|>system` and `<|User|>`.
Uncomment line 184:
```c
text = PROMPT_TEXT_PREFIX + input_str + PROMPT_TEXT_POSTFIX;
```
Then comment out line 185:
```c
// text = input_str;
```
Then run the build script:
```
bash ./build-linux.sh
```
Create an rkllm folder on the **development board**:
```
mkdir ~/rkllm && cd ~/rkllm
```
Use ADB, SFTP, or another method to upload `llm_demo` from `build/build_linux_aarch64_Release/` to the `rkllm` folder on the development board.
## 3. Upload the librkllmrt.so runtime library
Create a lib folder on the development board:
```
cd ~/rkllm && mkdir lib
```
Use ADB, SFTP, or another method to upload `librkllmrt.so` from `rknn-llm/rkllm-runtime/Linux/librkllm_api/aarch64` to the `rkllm/lib` folder on the development board.
## 4. Install git lfs on the PC
```
git lfs install
```
## 5. Clone this repository on the PC
```
git clone https://huggingface.co/VRxiaojie/DeepSeek-R1-Distill-Qwen-1.5B-RK3588S-RKLLM1.1.4
```
## 6. Upload the model to the development board
Use ADB or another tool to upload `deepseek-r1-1.5B-rkllm1.1.4.rkllm` from the `DeepSeek-R1-Distill-Qwen-1.5B-RK3588S-RKLLM1.1.4` folder to the rkllm folder just created on the development board.
## 7. Model inference
First, set the library path:
```
export LD_LIBRARY_PATH=./lib
```
Run llm_demo:
```
./llm_demo ./deepseek-r1-1.5B-rkllm1.1.4.rkllm 2048 2048
```
Usage: `./llm_demo model_path max_new_tokens max_context_len`
Wait a few seconds for the model to load, then type your message after the `user:` prompt.
bawin/lora-r16 | bawin | 2025-05-02T10:08:56Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"base_model:unsloth/Qwen2.5-7B",
"base_model:finetune:unsloth/Qwen2.5-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-02T10:08:22Z | ---
base_model: unsloth/Qwen2.5-7B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** bawin
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2.5-7B
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
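The card does not yet include usage code. If this repository holds a PEFT LoRA adapter (an assumption based on the repo name; it may instead contain merged weights), a minimal loading sketch would be:
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/Qwen2.5-7B"   # base model named in this card
adapter_id = "bawin/lora-r16"    # this repository (assumed to be an adapter)

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)

prompt = "Explain LoRA rank in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```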
|
aadhistii/mBERT-SDGs-Oplib-Elsevier | aadhistii | 2025-05-02T10:05:01Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-04-30T17:08:05Z | ---
library_name: transformers
license: apache-2.0
base_model: google-bert/bert-base-multilingual-cased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: mBERT-SDGs-Oplib-Elsevier
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mBERT-SDGs-Oplib-Elsevier
This model is a fine-tuned version of [google-bert/bert-base-multilingual-cased](https://huggingface.co/google-bert/bert-base-multilingual-cased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1424
- Accuracy: 0.4704
- F1 Micro: 0.8544
- F1 Macro: 0.8271
- Precision Micro: 0.8472
- Precision Macro: 0.8502
- Recall Micro: 0.8616
- Recall Macro: 0.8099
- Roc Auc: 0.9147
- Hamming Loss: 0.0506
## Model description
More information needed
## Intended uses & limitations
More information needed
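No usage example is provided yet; the sketch below assumes a multi-label setup (consistent with the micro/macro F1 and Hamming loss reported above), applying a sigmoid over the logits with a 0.5 decision threshold - the threshold and the example text are assumptions:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "aadhistii/mBERT-SDGs-Oplib-Elsevier"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

text = "Expanding access to clean water and sanitation in rural communities."
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)

with torch.no_grad():
    logits = model(**inputs).logits

# Multi-label decision: sigmoid per label, threshold at 0.5 (assumed)
probs = torch.sigmoid(logits).squeeze()
predicted = [model.config.id2label[i] for i, p in enumerate(probs.tolist()) if p > 0.5]
print(predicted)
```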
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.519039484152112e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1541154817500358
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Micro | F1 Macro | Precision Micro | Precision Macro | Recall Micro | Recall Macro | Roc Auc | Hamming Loss |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:--------:|:---------------:|:---------------:|:------------:|:------------:|:-------:|:------------:|
| No log | 1.0 | 179 | 0.3396 | 0.1061 | 0.5795 | 0.2527 | 0.6657 | 0.3839 | 0.5132 | 0.2504 | 0.7298 | 0.1283 |
| No log | 2.0 | 358 | 0.2201 | 0.2693 | 0.7598 | 0.5913 | 0.7532 | 0.7067 | 0.7664 | 0.5661 | 0.8571 | 0.0835 |
| 0.3269 | 3.0 | 537 | 0.1706 | 0.3792 | 0.8184 | 0.7380 | 0.7969 | 0.7462 | 0.8411 | 0.7397 | 0.8983 | 0.0643 |
| 0.3269 | 4.0 | 716 | 0.1542 | 0.4117 | 0.8235 | 0.7451 | 0.8305 | 0.8019 | 0.8166 | 0.7078 | 0.8909 | 0.0603 |
| 0.3269 | 5.0 | 895 | 0.1408 | 0.4469 | 0.8444 | 0.8084 | 0.8170 | 0.8129 | 0.8736 | 0.8114 | 0.9165 | 0.0555 |
| 0.1191 | 6.0 | 1074 | 0.1337 | 0.456 | 0.8484 | 0.8117 | 0.8394 | 0.8150 | 0.8576 | 0.8127 | 0.9117 | 0.0528 |
| 0.1191 | 7.0 | 1253 | 0.1401 | 0.4533 | 0.8464 | 0.8110 | 0.8341 | 0.8099 | 0.8591 | 0.8142 | 0.9118 | 0.0537 |
| 0.1191 | 8.0 | 1432 | 0.1372 | 0.4805 | 0.8556 | 0.8250 | 0.8590 | 0.8637 | 0.8522 | 0.7958 | 0.9115 | 0.0496 |
| 0.0605 | 9.0 | 1611 | 0.1390 | 0.4656 | 0.8500 | 0.8225 | 0.8340 | 0.8141 | 0.8665 | 0.8353 | 0.9153 | 0.0527 |
| 0.0605 | 10.0 | 1790 | 0.1424 | 0.4704 | 0.8544 | 0.8271 | 0.8472 | 0.8502 | 0.8616 | 0.8099 | 0.9147 | 0.0506 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.7.0+cu126
- Datasets 3.5.1
- Tokenizers 0.21.1
|
prithivMLmods/RSI-CB256-35 | prithivMLmods | 2025-05-02T10:04:51Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"siglip",
"image-classification",
"Location",
"RSI",
"Remote Sensing Instruments",
"en",
"dataset:jonathan-roberts1/RSI-CB256",
"base_model:google/siglip2-base-patch16-224",
"base_model:finetune:google/siglip2-base-patch16-224",
"doi:10.57967/hf/5324",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2025-05-01T19:49:28Z | ---
license: apache-2.0
datasets:
- jonathan-roberts1/RSI-CB256
language:
- en
base_model:
- google/siglip2-base-patch16-224
pipeline_tag: image-classification
library_name: transformers
tags:
- Location
- RSI
- Remote Sensing Instruments
---

# **RSI-CB256-35**
> **RSI-CB256-35** is a vision-language encoder model fine-tuned from **google/siglip2-base-patch16-224** for **multi-class remote sensing image classification**. Built using the **SiglipForImageClassification** architecture, it is designed to accurately categorize overhead imagery into 35 distinct land-use and land-cover categories.
```py
Classification Report:
precision recall f1-score support
parking lot 0.9978 0.9872 0.9925 467
avenue 0.9927 1.0000 0.9963 544
highway 0.9283 0.9865 0.9565 223
bridge 0.9283 0.9659 0.9467 469
marina 0.9946 1.0000 0.9973 366
crossroads 0.9909 0.9801 0.9855 553
airport runway 0.9956 0.9926 0.9941 678
pipeline 0.9900 1.0000 0.9950 198
town 0.9970 1.0000 0.9985 335
airplane 0.9915 0.9915 0.9915 351
forest 0.9972 0.9945 0.9958 1082
mangrove 1.0000 1.0000 1.0000 1049
artificial grassland 0.9821 0.9717 0.9769 283
river protection forest 1.0000 1.0000 1.0000 524
shrubwood 1.0000 1.0000 1.0000 1331
sapling 0.9955 1.0000 0.9977 879
sparse forest 1.0000 1.0000 1.0000 1110
lakeshore 1.0000 1.0000 1.0000 438
river 0.9680 0.9555 0.9617 539
stream 1.0000 0.9971 0.9985 688
coastline 0.9913 0.9978 0.9946 459
hirst 0.9890 1.0000 0.9945 628
dam 0.9868 0.9259 0.9554 324
sea 0.9971 0.9864 0.9917 1028
snow mountain 1.0000 1.0000 1.0000 1153
sandbeach 0.9944 0.9907 0.9925 536
mountain 0.9926 0.9938 0.9932 812
desert 0.9757 0.9927 0.9841 1092
dry farm 1.0000 0.9992 0.9996 1309
green farmland 0.9984 0.9969 0.9977 644
bare land 0.9870 0.9630 0.9748 864
city building 0.9785 0.9892 0.9838 1014
residents 0.9926 0.9877 0.9901 810
container 0.9970 0.9955 0.9962 660
storage room 0.9985 1.0000 0.9992 1307
accuracy 0.9919 24747
macro avg 0.9894 0.9897 0.9895 24747
weighted avg 0.9920 0.9919 0.9919 24747
```
---
## **Label Space: 35 Remote Sensing Classes**
This model supports the classification of satellite or aerial images into the following classes:
```
Class 0: "parking lot"
Class 1: "avenue"
Class 2: "highway"
Class 3: "bridge"
Class 4: "marina"
Class 5: "crossroads"
Class 6: "airport runway"
Class 7: "pipeline"
Class 8: "town"
Class 9: "airplane"
Class 10: "forest"
Class 11: "mangrove"
Class 12: "artificial grassland"
Class 13: "river protection forest"
Class 14: "shrubwood"
Class 15: "sapling"
Class 16: "sparse forest"
Class 17: "lakeshore"
Class 18: "river"
Class 19: "stream"
Class 20: "coastline"
Class 21: "hirst"
Class 22: "dam"
Class 23: "sea"
Class 24: "snow mountain"
Class 25: "sandbeach"
Class 26: "mountain"
Class 27: "desert"
Class 28: "dry farm"
Class 29: "green farmland"
Class 30: "bare land"
Class 31: "city building"
Class 32: "residents"
Class 33: "container"
Class 34: "storage room"
```
---
## **Install Dependencies**
```bash
pip install -q transformers torch pillow gradio
```
---
## **Inference Code**
```python
import gradio as gr
from transformers import AutoImageProcessor, SiglipForImageClassification
from PIL import Image
import torch
# Load model and processor
model_name = "prithivMLmods/RSI-CB256-35"
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)
# ID to label mapping
id2label = {
"0": "parking lot",
"1": "avenue",
"2": "highway",
"3": "bridge",
"4": "marina",
"5": "crossroads",
"6": "airport runway",
"7": "pipeline",
"8": "town",
"9": "airplane",
"10": "forest",
"11": "mangrove",
"12": "artificial grassland",
"13": "river protection forest",
"14": "shrubwood",
"15": "sapling",
"16": "sparse forest",
"17": "lakeshore",
"18": "river",
"19": "stream",
"20": "coastline",
"21": "hirst",
"22": "dam",
"23": "sea",
"24": "snow mountain",
"25": "sandbeach",
"26": "mountain",
"27": "desert",
"28": "dry farm",
"29": "green farmland",
"30": "bare land",
"31": "city building",
"32": "residents",
"33": "container",
"34": "storage room"
}
def classify_rsi_image(image):
image = Image.fromarray(image).convert("RGB")
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
logits = outputs.logits
probs = torch.nn.functional.softmax(logits, dim=1).squeeze().tolist()
prediction = {
id2label[str(i)]: round(probs[i], 3) for i in range(len(probs))
}
return prediction
# Gradio Interface
iface = gr.Interface(
fn=classify_rsi_image,
inputs=gr.Image(type="numpy"),
outputs=gr.Label(num_top_classes=5, label="Top-5 Predicted Categories"),
title="RSI-CB256-35",
description="Remote sensing image classification using SigLIP2. Upload an aerial or satellite image to classify its land-use category."
)
if __name__ == "__main__":
iface.launch()
```
---
## **Intended Use**
* **Land-Use Mapping and Planning**
* **Environmental Monitoring**
* **Infrastructure Identification**
* **Remote Sensing Analytics**
* **Agricultural and Forest Area Classification** |
AventIQ-AI/roberta-based-sentiment-analysis-for-twitter-tweets | AventIQ-AI | 2025-05-02T09:52:03Z | 0 | 0 | null | [
"safetensors",
"roberta",
"region:us"
] | null | 2025-05-01T10:16:06Z | # RoBERTa-Base Quantized Model for Sentiment Analysis
This repository hosts a quantized version of the RoBERTa model, fine-tuned for sentiment analysis on Twitter tweets. The model has been optimized for efficient deployment while maintaining high accuracy, making it suitable for resource-constrained environments.
## Model Details
- **Model Architecture:** RoBERTa Base
- **Task:** Sentiment Analysis
- **Dataset:** Twitter Sentiment Analysis
- **Quantization:** Float16
- **Fine-tuning Framework:** Hugging Face Transformers
## Usage
### Installation
```sh
pip install transformers torch
```
### Loading the Model
```python
from transformers import RobertaTokenizerFast, RobertaForSequenceClassification
import torch
# Load tokenizer and the fine-tuned float16 model from this repository
model_name = "AventIQ-AI/roberta-based-sentiment-analysis-for-twitter-tweets"
tokenizer = RobertaTokenizerFast.from_pretrained(model_name)
quantized_model = RobertaForSequenceClassification.from_pretrained(model_name)
quantized_model.eval()
# Define a test sentence
test_sentence = "The food was absolutely delicious and the service was amazing!"
# Tokenize input
inputs = tokenizer(test_sentence, return_tensors="pt", padding=True, truncation=True, max_length=128)
# Ensure input tensors are in correct dtype
inputs["input_ids"] = inputs["input_ids"].long() # Convert to long type
inputs["attention_mask"] = inputs["attention_mask"].long() # Convert to long type
# Make prediction
with torch.no_grad():
outputs = quantized_model(**inputs)
# Get predicted class
predicted_class = torch.argmax(outputs.logits, dim=1).item()
print(f"Predicted Class: {predicted_class}")
label_mapping = {0: "Negative", 1: "Neutral", 2: "Positive"}
#Example
predicted_label = label_mapping[predicted_class]
print(f"Predicted Label: {predicted_label}")
```
## Performance Metrics
- **Accuracy:** 0.913237
- **Precision:** 0.913336
- **Recall:** 0.913568
- **F1:** 0.913237
## Fine-Tuning Details
### Dataset
The dataset is taken from Kaggle.
### Training
- Number of epochs: 3
- Batch size: 16
- Evaluation strategy: epoch
- Learning rate: 2e-5
### Quantization
Post-training quantization was applied using PyTorch's built-in quantization framework to reduce the model size and improve inference efficiency.
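For reference, a minimal sketch of a float16 post-training quantization step is shown below; the local paths are placeholders and the exact procedure used for this checkpoint may differ.
```python
import torch
from transformers import RobertaForSequenceClassification

# Load the fine-tuned full-precision model (placeholder local path)
model = RobertaForSequenceClassification.from_pretrained("./roberta-finetuned-sentiment")

# Convert weights to float16 to shrink the checkpoint and speed up inference
quantized_model = model.half()
quantized_model.save_pretrained("./roberta-finetuned-sentiment-fp16")
```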
## Repository Structure
```
.
├── config.json
├── tokenizer_config.json
├── special_tokens_map.json
├── tokenizer.json
├── model.safetensors # Fine Tuned Model
├── README.md # Model documentation
```
## Limitations
- The model may not generalize well to domains outside the fine-tuning dataset.
- Quantization may result in minor accuracy degradation compared to full-precision models.
## Contributing
Contributions are welcome! Feel free to open an issue or submit a pull request if you have suggestions or improvements.
|
TOMFORD79/Zata_32 | TOMFORD79 | 2025-05-02T09:49:11Z | 0 | 0 | null | [
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | 2025-05-02T09:37:07Z | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
TOMFORD79/Zata_33 | TOMFORD79 | 2025-05-02T09:49:08Z | 0 | 0 | null | [
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | 2025-05-02T09:37:09Z | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
AliAhmed309/Ali | AliAhmed309 | 2025-05-02T09:32:03Z | 0 | 0 | null | [
"license:artistic-2.0",
"region:us"
] | null | 2025-05-02T09:32:03Z | ---
license: artistic-2.0
---
|
tanspring/98153edc-88ea-42e1-96e0-cb56693bc12c | tanspring | 2025-05-02T09:27:07Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"unsloth",
"trl",
"sft",
"base_model:microsoft/phi-1_5",
"base_model:finetune:microsoft/phi-1_5",
"endpoints_compatible",
"region:us"
] | null | 2025-05-01T08:16:23Z | ---
base_model: microsoft/phi-1_5
library_name: transformers
model_name: 98153edc-88ea-42e1-96e0-cb56693bc12c
tags:
- generated_from_trainer
- unsloth
- trl
- sft
licence: license
---
# Model Card for 98153edc-88ea-42e1-96e0-cb56693bc12c
This model is a fine-tuned version of [microsoft/phi-1_5](https://huggingface.co/microsoft/phi-1_5).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="tanspring/98153edc-88ea-42e1-96e0-cb56693bc12c", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/tanngospring/SN56_Finetuning/runs/wnvj9k3i)
This model was trained with SFT.
### Framework versions
- TRL: 0.16.1
- Transformers: 4.51.0
- Pytorch: 2.5.1
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
infogeo/12687786-3dcd-46b6-b965-7da61afc37ea | infogeo | 2025-05-02T09:18:48Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:sethuiyer/Medichat-Llama3-8B",
"base_model:adapter:sethuiyer/Medichat-Llama3-8B",
"license:other",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-02T09:10:55Z | ---
library_name: peft
license: other
base_model: sethuiyer/Medichat-Llama3-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 12687786-3dcd-46b6-b965-7da61afc37ea
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: sethuiyer/Medichat-Llama3-8B
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- ebaa36ac6b1bdb65_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/ebaa36ac6b1bdb65_train_data.json
type:
field_input: reasoning (reasoning_content)
field_instruction: question
field_output: response (content)
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.55
group_by_length: false
hub_model_id: infogeo/12687786-3dcd-46b6-b965-7da61afc37ea
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 1.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 150
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/ebaa36ac6b1bdb65_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 33f6b38d-f8bd-4301-b3c9-673be809902f
wandb_project: s56-28
wandb_run: your_name
wandb_runid: 33f6b38d-f8bd-4301-b3c9-673be809902f
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 12687786-3dcd-46b6-b965-7da61afc37ea
This model is a fine-tuned version of [sethuiyer/Medichat-Llama3-8B](https://huggingface.co/sethuiyer/Medichat-Llama3-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1754
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 150
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.038 | 0.0601 | 150 | 1.1754 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
jcofresh/ts_ticketing_modelv2.1 | jcofresh | 2025-05-02T09:03:16Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/mistral-7b-instruct-v0.3",
"base_model:finetune:unsloth/mistral-7b-instruct-v0.3",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-02T08:56:43Z | ---
base_model: unsloth/mistral-7b-instruct-v0.3
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** jcofresh
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-instruct-v0.3
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mradermacher/calculator_agent_qwen2.5_3b-GGUF | mradermacher | 2025-05-02T09:01:16Z | 579 | 1 | transformers | [
"transformers",
"gguf",
"agent",
"grpo",
"mult-turn-rl",
"en",
"base_model:Dan-AiTuning/calculator_agent_qwen2.5_3b",
"base_model:quantized:Dan-AiTuning/calculator_agent_qwen2.5_3b",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-26T16:08:34Z | ---
base_model: Dan-AiTuning/calculator_agent_qwen2.5_3b
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- agent
- grpo
- mult-turn-rl
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Dan-AiTuning/calculator_agent_qwen2.5_3b
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
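As a minimal local-inference illustration (not an official recipe for this repository), a downloaded quant can be loaded with `llama-cpp-python`; the file name matches one of the quants listed below, while the context size and chat parameters are assumptions.
```python
from llama_cpp import Llama

# Load a downloaded GGUF quant (pick any file from the table below)
llm = Llama(model_path="calculator_agent_qwen2.5_3b.Q4_K_M.gguf", n_ctx=4096)

# Simple chat-style completion
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is 37 * 42?"}],
    max_tokens=128,
)
print(response["choices"][0]["message"]["content"])
```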
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/calculator_agent_qwen2.5_3b-GGUF/resolve/main/calculator_agent_qwen2.5_3b.Q2_K.gguf) | Q2_K | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/calculator_agent_qwen2.5_3b-GGUF/resolve/main/calculator_agent_qwen2.5_3b.Q3_K_S.gguf) | Q3_K_S | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/calculator_agent_qwen2.5_3b-GGUF/resolve/main/calculator_agent_qwen2.5_3b.Q3_K_M.gguf) | Q3_K_M | 1.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/calculator_agent_qwen2.5_3b-GGUF/resolve/main/calculator_agent_qwen2.5_3b.Q3_K_L.gguf) | Q3_K_L | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/calculator_agent_qwen2.5_3b-GGUF/resolve/main/calculator_agent_qwen2.5_3b.IQ4_XS.gguf) | IQ4_XS | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/calculator_agent_qwen2.5_3b-GGUF/resolve/main/calculator_agent_qwen2.5_3b.Q4_K_S.gguf) | Q4_K_S | 1.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/calculator_agent_qwen2.5_3b-GGUF/resolve/main/calculator_agent_qwen2.5_3b.Q4_K_M.gguf) | Q4_K_M | 2.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/calculator_agent_qwen2.5_3b-GGUF/resolve/main/calculator_agent_qwen2.5_3b.Q5_K_S.gguf) | Q5_K_S | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/calculator_agent_qwen2.5_3b-GGUF/resolve/main/calculator_agent_qwen2.5_3b.Q5_K_M.gguf) | Q5_K_M | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/calculator_agent_qwen2.5_3b-GGUF/resolve/main/calculator_agent_qwen2.5_3b.Q6_K.gguf) | Q6_K | 2.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/calculator_agent_qwen2.5_3b-GGUF/resolve/main/calculator_agent_qwen2.5_3b.Q8_0.gguf) | Q8_0 | 3.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/calculator_agent_qwen2.5_3b-GGUF/resolve/main/calculator_agent_qwen2.5_3b.f16.gguf) | f16 | 6.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/GLM-4-32B-0414-GGUF | mradermacher | 2025-05-02T08:59:52Z | 366 | 1 | transformers | [
"transformers",
"gguf",
"zh",
"en",
"base_model:THUDM/GLM-4-32B-0414",
"base_model:quantized:THUDM/GLM-4-32B-0414",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-29T03:29:12Z | ---
base_model: THUDM/GLM-4-32B-0414
language:
- zh
- en
library_name: transformers
license: mit
no_imatrix: '[1]4.8018,[2]3.9219,[3]3.6737,nan detected in blk.1.ffn_up.weight'
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/THUDM/GLM-4-32B-0414
<!-- provided-files -->
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/GLM-4-32B-0414-GGUF/resolve/main/GLM-4-32B-0414.Q2_K.gguf) | Q2_K | 12.4 | |
| [GGUF](https://huggingface.co/mradermacher/GLM-4-32B-0414-GGUF/resolve/main/GLM-4-32B-0414.Q3_K_S.gguf) | Q3_K_S | 14.5 | |
| [GGUF](https://huggingface.co/mradermacher/GLM-4-32B-0414-GGUF/resolve/main/GLM-4-32B-0414.Q3_K_M.gguf) | Q3_K_M | 16.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/GLM-4-32B-0414-GGUF/resolve/main/GLM-4-32B-0414.Q3_K_L.gguf) | Q3_K_L | 17.3 | |
| [GGUF](https://huggingface.co/mradermacher/GLM-4-32B-0414-GGUF/resolve/main/GLM-4-32B-0414.IQ4_XS.gguf) | IQ4_XS | 17.9 | |
| [GGUF](https://huggingface.co/mradermacher/GLM-4-32B-0414-GGUF/resolve/main/GLM-4-32B-0414.Q4_K_S.gguf) | Q4_K_S | 18.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/GLM-4-32B-0414-GGUF/resolve/main/GLM-4-32B-0414.Q4_K_M.gguf) | Q4_K_M | 19.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/GLM-4-32B-0414-GGUF/resolve/main/GLM-4-32B-0414.Q5_K_S.gguf) | Q5_K_S | 22.6 | |
| [GGUF](https://huggingface.co/mradermacher/GLM-4-32B-0414-GGUF/resolve/main/GLM-4-32B-0414.Q5_K_M.gguf) | Q5_K_M | 23.2 | |
| [GGUF](https://huggingface.co/mradermacher/GLM-4-32B-0414-GGUF/resolve/main/GLM-4-32B-0414.Q6_K.gguf) | Q6_K | 26.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/GLM-4-32B-0414-GGUF/resolve/main/GLM-4-32B-0414.Q8_0.gguf) | Q8_0 | 34.7 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
wriindonesia/mistral-nbs-pubmed | wriindonesia | 2025-05-02T08:55:55Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-02T08:55:45Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
adindusugen/dfbdfsbdfb | adindusugen | 2025-05-02T08:51:27Z | 0 | 0 | null | [
"license:bigscience-openrail-m",
"region:us"
] | null | 2025-05-02T08:51:27Z | ---
license: bigscience-openrail-m
---
|
SimoRancati/SARITA | SimoRancati | 2025-05-02T08:36:54Z | 0 | 0 | null | [
"text-generation",
"base_model:lightonai/RITA_l",
"base_model:finetune:lightonai/RITA_l",
"license:creativeml-openrail-m",
"region:us"
] | text-generation | 2024-12-29T08:20:53Z | ---
license: creativeml-openrail-m
base_model:
- lightonai/RITA_s
- lightonai/RITA_m
- lightonai/RITA_l
- lightonai/RITA_xl
pipeline_tag: text-generation
---
# SARITA

**SARITA (or SARS-CoV-2 RITA)** is an LLM designed to generate new, synthetic, high-quality and highly realistic SARS-CoV-2 S1 subunits. SARITA builds upon the continual learning framework of RITA, a state-of-the-art generative language model. RITA is an autoregressive model for general protein sequence generation with up to 1.2 billion parameters.
To capture the unique biological features of the Spike protein and obtain a specialized approach, we apply continual learning to pre-train RITA via high-quality SARS-CoV-2 S1 sequences from GISAID. To match different needs in terms of computational capacities, SARITA comes in four different sizes:
the smallest model has 85 million parameters, while the largest has 1.2 billion. SARITA generates new S1 sequences using as input the 14-amino-acid sequence that precedes them. The results of SARITA are reported in the following pre-print: https://www.biorxiv.org/content/10.1101/2024.12.10.627777v1.
The code to train and evaluate the model is available on [GitHub](https://github.com/simoRancati/SARITA)
SARITA models trained with high-quality SARS-CoV-2 S1 sequences from December 2019 - March 2021. **Click on any model name (e.g. Small, Medium, Large and XLarge) to go to its dedicated page, where you’ll find detailed access instructions and example code snippets to help you reproduce our results.**
Model | #Params | d_model | layers
--- | --- | --- | --- |
[Small](https://huggingface.co/SimoRancati/SARITA_S) | 85M | 768 | 12
[Medium](https://huggingface.co/SimoRancati/SARITA_M) | 300M | 1024 | 24
[Large](https://huggingface.co/SimoRancati/SARITA_L)| 680M | 1536 | 24
[XLarge](https://huggingface.co/SimoRancati/SARITA_XL)| 1.2B | 2048 | 24
SARITA models trained with high-quality SARS-CoV-2 S1 sequences from December 2019 - August 2024. **Click on any model name (e.g. Small, Medium, Large and XLarge) to go to its dedicated page, where you’ll find detailed access instructions and example code snippets to help you reproduce our results.**
Model | #Params | d_model | layers
--- | --- | --- | --- |
[Small](https://huggingface.co/SimoRancati/SARITA_S.0.1) | 85M | 768 | 12
[Medium](https://huggingface.co/SimoRancati/SARITA_M.0.1) | 300M | 1024 | 24
[Large](https://huggingface.co/SimoRancati/SARITA_L.0.1)| 680M | 1536 | 24
[XLarge](https://huggingface.co/SimoRancati/SARITA_XL.0.1)| 1.2B | 2048 | 24
# Architecture
The SARITA architecture is based on a series of decoder-only transformers, inspired by the GPT-3 model. It employs Rotary Positional Embeddings (RoPE) to enhance the model's ability to capture positional relationships within the input data. SARITA is available in
four configurations: SARITA-S with 85 million parameters, featuring an embedding size of 768 and 12 transformer layers; SARITA-M with 300 million parameters, featuring an embedding dimension of 1024 and 24 layers; SARITA-L with 680 million parameters featuring an embedding size of 1536
and 24 layers; and SARITA-XL, with 1.2 billion parameters, featuring an embedding size of 2048, and 24 layers. All SARITA models can generate sequences up to 1024 tokens long. SARITA uses the Unigram model for tokenization, where each amino acid is represented as a single token, reflecting
its unique role in protein structure and function. The tokenizer also includes special tokens like <PAD> for padding shorter sequences and <EOS> for marking sequence ends, ensuring consistency across datasets. This process reduces variability and enhances the model's ability to learn meaningful
patterns from protein sequences. Finally, each token is transformed into a numerical representation using a look-up table.

## Model description
SARITA is an LLM with up to 1.2B parameters, based on GPT-3 architecture, designed to generate high-quality synthetic SARS-CoV-2 Spike sequences.
SARITA is trained via continual learning on the pre-existing protein model RITA.
## Intended uses & limitations
This model can be used to generate synthetic Spike proteins of the SARS-CoV-2 virus; a minimal generation sketch is shown below.
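The sketch below is illustrative only: it assumes the SARITA checkpoints load with the same `transformers` pattern as their RITA base models (hence `trust_remote_code=True`), and the 14-amino-acid prompt and sampling settings are assumptions, not values from the paper.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

# Load one SARITA checkpoint (loading pattern assumed to match the RITA base models)
model = AutoModelForCausalLM.from_pretrained("SimoRancati/SARITA_S", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("SimoRancati/SARITA_S")

generator = pipeline("text-generation", model=model, tokenizer=tokenizer)

# 14-amino-acid prefix of the Spike protein used as the prompt (example value)
prompt = "MFVFLVLLPLVSSQ"
outputs = generator(prompt, max_length=700, do_sample=True, top_k=950, repetition_penalty=1.2)
print(outputs[0]["generated_text"])
```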
### Framework versions
- Transformers 4.20.1
- Pytorch 1.9.0+cu111
- Datasets 2.18.0
- Tokenizers 0.12.1 |
solongeran/Flux.1D_Grand_Piano | solongeran | 2025-05-02T08:35:09Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:mit",
"region:us"
] | text-to-image | 2025-05-02T08:34:25Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
parameters:
negative_prompt: '-'
output:
url: images/grand_piano_helper_3.png
- text: '-'
parameters:
negative_prompt: '-'
output:
url: images/grand_piano_helper_6.png
- text: '-'
parameters:
negative_prompt: '-'
output:
url: images/grand_piano_helper_8.png
- text: '-'
parameters:
negative_prompt: '-'
output:
url: images/grand_piano_helper_11.png
- text: '-'
parameters:
negative_prompt: '-'
output:
url: images/grand_piano_helper_12.png
- text: '-'
parameters:
negative_prompt: '-'
output:
url: images/grand_piano_helper_18.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: Grand Piano, piano
license: mit
---
# Flux.1D_Grand_Piano_LoRA_SD
<Gallery />
## Model description
This LoRA supports base models (flux.1-dev, ...) in creating highly detailed and realistic pianos. The training data comes mainly from grand pianos.
Attention was paid to detail density, detail fidelity, and correct scaling (the arrangement of the individual elements/components).
A cascade model based on this basic LoRA will be released shortly; the training data is currently being processed and the division logic is being calculated.
Usual and stable application in open workflows; 50/50 mixing up to 100/100 is possible (see the loading sketch after the example images below).


## Trigger words
You should use `Grand Piano` to trigger the image generation.
You should use `piano` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/solongeran/Flux.1D_Grand_Piano/tree/main) them in the Files & versions tab.
|
marco4678/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-mighty_bipedal_tiger | marco4678 | 2025-05-02T08:21:41Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am mighty bipedal tiger",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-09T07:12:54Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-mighty_bipedal_tiger
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am mighty bipedal tiger
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-mighty_bipedal_tiger
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="marco4678/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-mighty_bipedal_tiger", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.0
- Pytorch: 2.5.1
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
JnsDev/tinyllama-1.1b-cs-adapter | JnsDev | 2025-05-02T08:14:11Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-27T10:59:33Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
hanaearg/emo-Llama-3.1-8B-eng-10epochs | hanaearg | 2025-05-02T08:12:32Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-02T08:12:23Z | ---
base_model: unsloth/llama-3.1-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** hanaearg
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.1-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
maximilianshwarzmullers/Hukukchy | maximilianshwarzmullers | 2025-05-02T08:08:22Z | 0 | 0 | null | [
"tensorboard",
"legal",
"tk",
"base_model:sentence-transformers/all-MiniLM-L6-v2",
"base_model:finetune:sentence-transformers/all-MiniLM-L6-v2",
"license:mit",
"region:us"
] | null | 2025-05-02T07:54:52Z | ---
license: mit
language:
- tk
base_model:
- sentence-transformers/all-MiniLM-L6-v2
tags:
- legal
--- |
faraya1/outputs | faraya1 | 2025-05-02T07:56:35Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"unsloth",
"trl",
"sft",
"base_model:unsloth/SmolLM2-1.7B-Instruct-bnb-4bit",
"base_model:finetune:unsloth/SmolLM2-1.7B-Instruct-bnb-4bit",
"endpoints_compatible",
"region:us"
] | null | 2025-04-17T17:26:16Z | ---
base_model: unsloth/SmolLM2-1.7B-Instruct-bnb-4bit
library_name: transformers
model_name: outputs
tags:
- generated_from_trainer
- unsloth
- trl
- sft
licence: license
---
# Model Card for outputs
This model is a fine-tuned version of [unsloth/SmolLM2-1.7B-Instruct-bnb-4bit](https://huggingface.co/unsloth/SmolLM2-1.7B-Instruct-bnb-4bit).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="faraya1/outputs", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0+cu124
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
SasikaA073/qwen2-7b-instruct-trl-sft-GQA | SasikaA073 | 2025-05-02T07:54:11Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen2-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2-VL-7B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-02T07:41:44Z | ---
base_model: Qwen/Qwen2-VL-7B-Instruct
library_name: transformers
model_name: qwen2-7b-instruct-trl-sft-GQA
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for qwen2-7b-instruct-trl-sft-GQA
This model is a fine-tuned version of [Qwen/Qwen2-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="SasikaA073/qwen2-7b-instruct-trl-sft-GQA", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/sasikayw-sgsmu/qwen2-7b-instruct-trl-sft-GQA/runs/mpcsxxj3)
This model was trained with SFT.
### Framework versions
- TRL: 0.17.0
- Transformers: 4.48.0
- Pytorch: 2.7.0
- Datasets: 3.5.0
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
hxyscott/enhanced_solution_log_error_removed-True-full_finetune | hxyscott | 2025-05-02T07:38:23Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"unsloth",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-02T02:57:47Z | ---
library_name: transformers
tags:
- unsloth
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
BuchananBuchanan/BuchananBuchanan | BuchananBuchanan | 2025-05-02T07:26:24Z | 0 | 0 | null | [
"license:artistic-2.0",
"region:us"
] | null | 2025-05-02T07:26:24Z | ---
license: artistic-2.0
---
|
John6666/realarchmix-xl-v20-sdxl | John6666 | 2025-05-02T07:10:29Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"realistic",
"photorealistic",
"landscape",
"building",
"interior",
"architecture",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2025-05-02T07:04:42Z | ---
license: creativeml-openrail-m
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- realistic
- photorealistic
- landscape
- building
- interior
- architecture
---
Original model is [here](https://civitai.com/models/1323614/realarchmix?modelVersionId=1732505).
This model was created by [jjhuang](https://civitai.com/user/jjhuang).
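A minimal `diffusers` loading sketch is shown below; the prompt, step count, and guidance value are illustrative assumptions, not recommended settings.
```python
import torch
from diffusers import StableDiffusionXLPipeline

# Load this checkpoint from the Hub in half precision
pipe = StableDiffusionXLPipeline.from_pretrained(
    "John6666/realarchmix-xl-v20-sdxl", torch_dtype=torch.float16
)
pipe.to("cuda")

# Illustrative architecture prompt and settings
image = pipe(
    "modern house exterior, photorealistic architecture, golden hour",
    num_inference_steps=28,
    guidance_scale=6.0,
).images[0]
image.save("realarchmix_sample.png")
```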
|
quacufaizza/zxcvxcv | quacufaizza | 2025-05-02T07:05:46Z | 0 | 0 | null | [
"license:bigscience-openrail-m",
"region:us"
] | null | 2025-05-02T07:05:46Z | ---
license: bigscience-openrail-m
---
|
AventIQ-AI/text-summarization-for-government-policies | AventIQ-AI | 2025-05-02T06:57:29Z | 0 | 1 | null | [
"safetensors",
"t5",
"region:us"
] | null | 2025-05-02T06:51:13Z | # Text-to-Text Transfer Transformer Quantized Model for Text Summarization for government policies
This repository hosts a quantized version of the T5 model, fine-tuned for text summarization tasks. The model has been optimized for efficient deployment while maintaining high accuracy, making it suitable for resource-constrained environments.
## Model Details
- **Model Architecture:** T5
- **Task:** Text Summarization for Government Policies
- **Dataset:** Hugging Face's `cnn_dailymail`
- **Quantization:** Float16
- **Fine-tuning Framework:** Hugging Face Transformers
## Usage
### Installation
```sh
pip install transformers torch
```
### Loading the Model
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
import torch
device = "cuda" if torch.cuda.is_available() else "cpu"
model_name = "AventIQ-AI/text-summarization-for-government-policies"
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name).to(device)
def test_summarization(model, tokenizer):
user_text = input("\nEnter your text for summarization:\n")
input_text = "summarize: " + user_text
inputs = tokenizer(input_text, return_tensors="pt", truncation=True, max_length=512).to(device)
output = model.generate(
**inputs,
max_new_tokens=100,
num_beams=5,
length_penalty=0.8,
early_stopping=True
)
summary = tokenizer.decode(output[0], skip_special_tokens=True)
return summary
print("\n📝 **Model Summary:**")
print(test_summarization(model, tokenizer))
```
# 📊 ROUGE Evaluation Results
After fine-tuning the **T5-Small** model for text summarization, we obtained the following **ROUGE** scores:
| **Metric** | **Score** | **Meaning** |
|-------------|-----------|-------------|
| **ROUGE-1** | **0.3061** (~30%) | Measures overlap of **unigrams (single words)** between the reference and generated summary. |
| **ROUGE-2** | **0.1241** (~12%) | Measures overlap of **bigrams (two-word phrases)**, indicating coherence and fluency. |
| **ROUGE-L** | **0.2233** (~22%) | Measures **longest matching word sequences**, testing sentence structure preservation. |
| **ROUGE-Lsum** | **0.2620** (~26%) | Similar to ROUGE-L but optimized for summarization tasks. |
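For reference, these scores can be reproduced in outline with the `evaluate` library (requires `pip install evaluate rouge_score`); the two texts below are placeholders, not items from the evaluation set.
```python
import evaluate

# Load the ROUGE metric
rouge = evaluate.load("rouge")

# Placeholder prediction/reference pair; in practice these come from the model and cnn_dailymail
predictions = ["the government announced a new policy on renewable energy subsidies"]
references = ["the government unveiled a new renewable energy subsidy policy on monday"]

results = rouge.compute(predictions=predictions, references=references)
print(results)  # keys: rouge1, rouge2, rougeL, rougeLsum
```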
## Fine-Tuning Details
### Dataset
Hugging Face's `cnn_dailymail` dataset was used; it pairs news articles with reference summaries.
### Training
- Number of epochs: 3
- Batch size: 4
- Evaluation strategy: epoch
- Learning rate: 3e-5
### Quantization
Post-training quantization was applied using PyTorch's built-in quantization framework to reduce the model size and improve inference efficiency.
## Repository Structure
```
.
├── model/ # Contains the quantized model files
├── tokenizer_config/ # Tokenizer configuration and vocabulary files
├── model.safetensors/ # Quantized Model
├── README.md # Model documentation
```
## Limitations
- The model may not generalize well to domains outside the fine-tuning dataset.
- Quantization may result in minor accuracy degradation compared to full-precision models.
## Contributing
Contributions are welcome! Feel free to open an issue or submit a pull request if you have suggestions or improvements. |
BABYSHARK09/Nf | BABYSHARK09 | 2025-05-02T06:54:38Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-02T06:50:11Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ttn1410/FnReasoning4 | ttn1410 | 2025-05-02T06:53:27Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma2",
"trl",
"en",
"base_model:unsloth/gemma-2-9b-bnb-4bit",
"base_model:finetune:unsloth/gemma-2-9b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-01T20:34:05Z | ---
base_model: unsloth/gemma-2-9b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** ttn1410
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-2-9b-bnb-4bit
This gemma2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
John6666/neon-city-blend-illustriousxl-ncbilxl2anime-sdxl | John6666 | 2025-05-02T06:47:37Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"girls",
"realistic",
"illustrious",
"en",
"base_model:OnomaAIResearch/Illustrious-xl-early-release-v0",
"base_model:finetune:OnomaAIResearch/Illustrious-xl-early-release-v0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2025-05-02T06:41:56Z | ---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- girls
- realistic
- illustrious
base_model: OnomaAIResearch/Illustrious-xl-early-release-v0
---
Original model is [here](https://civitai.com/models/867043/neon-city-blend-illustrious-xl?modelVersionId=1733452).
This model created by [tamattama](https://civitai.com/user/tamattama).
|
BABYSHARK09/Nq | BABYSHARK09 | 2025-05-02T06:41:38Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-02T06:29:44Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
fitrilailyy/llm-assn1 | fitrilailyy | 2025-05-02T06:36:17Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/Llama-3.2-1B-unsloth-bnb-4bit",
"base_model:finetune:unsloth/Llama-3.2-1B-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-02T06:34:02Z | ---
base_model: unsloth/Llama-3.2-1B-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** fitrilailyy
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Llama-3.2-1B-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Sorawiz/Qwen2.5-14B-Instinct-RP | Sorawiz | 2025-05-02T06:35:36Z | 46 | 1 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2311.03099",
"base_model:Goekdeniz-Guelmez/Josiefied-Qwen2.5-14B-Instruct-abliterated-v4",
"base_model:merge:Goekdeniz-Guelmez/Josiefied-Qwen2.5-14B-Instruct-abliterated-v4",
"base_model:Sao10K/14B-Qwen2.5-Freya-x1",
"base_model:merge:Sao10K/14B-Qwen2.5-Freya-x1",
"base_model:Sao10K/14B-Qwen2.5-Kunou-v1",
"base_model:merge:Sao10K/14B-Qwen2.5-Kunou-v1",
"base_model:SicariusSicariiStuff/Impish_QWEN_14B-1M",
"base_model:merge:SicariusSicariiStuff/Impish_QWEN_14B-1M",
"base_model:Sorawiz/Qwen2.5-14B-GCC",
"base_model:merge:Sorawiz/Qwen2.5-14B-GCC",
"base_model:Ttimofeyka/Tissint-14B-v1.2-128k-RP",
"base_model:merge:Ttimofeyka/Tissint-14B-v1.2-128k-RP",
"base_model:deepcogito/cogito-v1-preview-qwen-14B",
"base_model:merge:deepcogito/cogito-v1-preview-qwen-14B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-15T17:20:25Z | ---
base_model:
- Ttimofeyka/Tissint-14B-v1.2-128k-RP
- SicariusSicariiStuff/Impish_QWEN_14B-1M
- Goekdeniz-Guelmez/Josiefied-Qwen2.5-14B-Instruct-abliterated-v4
- deepcogito/cogito-v1-preview-qwen-14B
- Sao10K/14B-Qwen2.5-Freya-x1
- Sao10K/14B-Qwen2.5-Kunou-v1
- Sorawiz/Qwen2.5-14B-GCC
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE TIES](https://arxiv.org/abs/2311.03099) merge method using Sorawiz/Qwen2.5-14B-1M-Instinct as a base.
### Models Merged
The following models were included in the merge:
* [Ttimofeyka/Tissint-14B-v1.2-128k-RP](https://huggingface.co/Ttimofeyka/Tissint-14B-v1.2-128k-RP)
* [SicariusSicariiStuff/Impish_QWEN_14B-1M](https://huggingface.co/SicariusSicariiStuff/Impish_QWEN_14B-1M)
* [Sorawiz/Qwen2.5-14B-GCC](https://huggingface.co/Sorawiz/Qwen2.5-14B-GCC)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
name: Sorawiz/Qwen2.5-14B-Instinct-Base
merge_method: dare_ties
base_model: Goekdeniz-Guelmez/Josiefied-Qwen2.5-14B-Instruct-abliterated-v4
models:
  - model: Goekdeniz-Guelmez/Josiefied-Qwen2.5-14B-Instruct-abliterated-v4
    parameters:
      weight: 0.3
  - model: Ttimofeyka/Tissint-14B-v1.2-128k-RP
    parameters:
      weight: 0.7
parameters:
  density: 1
tokenizer:
  source: union
chat_template: auto
---
name: Sorawiz/Qwen2.5-14B-Instincto
merge_method: dare_ties
base_model: deepcogito/cogito-v1-preview-qwen-14B
models:
  - model: deepcogito/cogito-v1-preview-qwen-14B
    parameters:
      weight: 0.4
  - model: Sorawiz/Qwen2.5-14B-Instinct-Base
    parameters:
      weight: 0.3
  - model: Ttimofeyka/Tissint-14B-v1.2-128k-RP
    parameters:
      weight: 0.3
parameters:
  density: 0.5
tokenizer:
  source: union
chat_template: auto
---
name: Sorawiz/Qwen2.5-14B-Kunousint
merge_method: dare_ties
base_model: Sao10K/14B-Qwen2.5-Kunou-v1
models:
  - model: Sao10K/14B-Qwen2.5-Kunou-v1
    parameters:
      weight: 0.5
  - model: Sorawiz/Qwen2.5-14B-Instincto
    parameters:
      weight: 0.3
  - model: Ttimofeyka/Tissint-14B-v1.2-128k-RP
    parameters:
      weight: 0.2
parameters:
  density: 0.5
tokenizer:
  source: union
chat_template: auto
---
name: Sorawiz/Qwen2.5-14B-Kunousint-1M
merge_method: dare_ties
base_model: Sorawiz/Qwen2.5-14B-Imstinct
models:
  - model: Sorawiz/Qwen2.5-14B-Imstinct
    parameters:
      weight: 0.2
  - model: Sorawiz/Qwen2.5-14B-Kunousint
    parameters:
      weight: 0.5
  - model: Sao10K/14B-Qwen2.5-Kunou-v1
    parameters:
      weight: 0.3
parameters:
  density: 0.5
tokenizer:
  source: union
chat_template: auto
---
name: Sorawiz/Qwen2.5-14B-Frayasint
merge_method: dare_ties
base_model: Sao10K/14B-Qwen2.5-Freya-x1
models:
  - model: Sao10K/14B-Qwen2.5-Freya-x1
    parameters:
      weight: 0.5
  - model: Sorawiz/Qwen2.5-14B-Instincto
    parameters:
      weight: 0.3
  - model: Ttimofeyka/Tissint-14B-v1.2-128k-RP
    parameters:
      weight: 0.2
parameters:
  density: 0.5
tokenizer:
  source: union
chat_template: auto
---
name: Sorawiz/Qwen2.5-14B-Frayasint-1M
merge_method: dare_ties
base_model: Sorawiz/Qwen2.5-14B-Imstinct
models:
  - model: Sorawiz/Qwen2.5-14B-Imstinct
    parameters:
      weight: 0.2
  - model: Sorawiz/Qwen2.5-14B-Frayasint
    parameters:
      weight: 0.5
  - model: Sao10K/14B-Qwen2.5-Freya-x1
    parameters:
      weight: 0.3
parameters:
  density: 0.5
tokenizer:
  source: union
chat_template: auto
---
name: Sorawiz/Qwen2.5-14B-1M-Instinct
merge_method: dare_ties
base_model: Sorawiz/Qwen2.5-14B-Imstinct
models:
  - model: Sorawiz/Qwen2.5-14B-Imstinct
    parameters:
      weight: 0.25
  - model: Sorawiz/Qwen2.5-14B-1M-Kunousint-1M
    parameters:
      weight: 0.25
  - model: Sorawiz/Qwen2.5-14B-Frayasint-1M
    parameters:
      weight: 0.25
  - model: Ttimofeyka/Tissint-14B-v1.2-128k-RP
    parameters:
      weight: 0.25
parameters:
  density: 1
tokenizer:
  source: union
chat_template: auto
---
merge_method: dare_ties
base_model: Sorawiz/Qwen2.5-14B-1M-Instinct
models:
  - model: Sorawiz/Qwen2.5-14B-1M-Instinct
    parameters:
      weight: 0.40
  - model: Ttimofeyka/Tissint-14B-v1.2-128k-RP
    parameters:
      weight: 0.25
  - model: SicariusSicariiStuff/Impish_QWEN_14B-1M
    parameters:
      weight: 0.25
  - model: Sorawiz/Qwen2.5-14B-GCC
    parameters:
      weight: 0.10
parameters:
  density: 0.5
tokenizer:
  source: union
chat_template: auto
```
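Each staged config above can be applied with mergekit's `mergekit-yaml` CLI. As a sketch of the equivalent Python usage, assuming mergekit's documented entry points (`MergeConfiguration`, `MergeOptions`, `run_merge`) and a single-document config file such as the final merge step, with placeholder paths:

```python
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Placeholder path pointing at one merge step (one YAML document) from the config above
with open("instinct-rp-final.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./Qwen2.5-14B-Instinct-RP",
    options=MergeOptions(copy_tokenizer=True, lazy_unpickle=False, low_cpu_memory=False),
)
```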
|
oddegen/wav2vec2-large-mms-1b-amharic-colab | oddegen | 2025-05-02T06:31:27Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice_17_0",
"base_model:facebook/mms-1b-all",
"base_model:finetune:facebook/mms-1b-all",
"license:cc-by-nc-4.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-05-02T02:50:09Z | ---
library_name: transformers
license: cc-by-nc-4.0
base_model: facebook/mms-1b-all
tags:
- generated_from_trainer
datasets:
- common_voice_17_0
metrics:
- wer
model-index:
- name: wav2vec2-large-mms-1b-amharic-colab
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: common_voice_17_0
type: common_voice_17_0
config: am
split: test
args: am
metrics:
- name: Wer
type: wer
value: 0.504746835443038
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-mms-1b-amharic-colab
This model is a fine-tuned version of [facebook/mms-1b-all](https://huggingface.co/facebook/mms-1b-all) on the common_voice_17_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6247
- Wer: 0.5047
## Model description
More information needed
## Intended uses & limitations
More information needed
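A minimal transcription sketch is shown below; it assumes the repository ships the usual processor files for MMS fine-tunes and that the input audio is resampled to 16 kHz mono (the file name is a placeholder):

```python
import torch
import librosa
from transformers import AutoProcessor, Wav2Vec2ForCTC

model_id = "oddegen/wav2vec2-large-mms-1b-amharic-colab"
processor = AutoProcessor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# MMS checkpoints expect 16 kHz mono audio
speech, _ = librosa.load("amharic_sample.wav", sr=16_000, mono=True)
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

pred_ids = torch.argmax(logits, dim=-1)[0]
print(processor.decode(pred_ids))
```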
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 15.6099 | 1.1364 | 50 | 3.3812 | 0.9995 |
| 1.174 | 2.2727 | 100 | 0.6846 | 0.5174 |
| 0.6566 | 3.4091 | 150 | 0.6247 | 0.5047 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.5.1+cu124
- Datasets 3.5.0
- Tokenizers 0.21.0
|
chetanpatil5/sonsal | chetanpatil5 | 2025-05-02T06:20:35Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-02T06:20:35Z | ---
license: apache-2.0
---
|
SampsonSampson/SampsonSampson | SampsonSampson | 2025-05-02T06:16:38Z | 0 | 0 | null | [
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | 2025-05-02T06:16:38Z | ---
license: bigscience-bloom-rail-1.0
---
|
FergusonFerguson/FergusonFerguson | FergusonFerguson | 2025-05-02T06:16:38Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-02T06:16:38Z | ---
license: apache-2.0
---
|
kate1130/kluebert-bullying-classifier | kate1130 | 2025-05-02T06:15:07Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-02T06:13:34Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
nikhilkeetha/qwen2.5-0.5b-personal-assistant-Q4_K_M-GGUF | nikhilkeetha | 2025-05-02T06:14:48Z | 0 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:nikhilkeetha/qwen2.5-0.5b-personal-assistant",
"base_model:quantized:nikhilkeetha/qwen2.5-0.5b-personal-assistant",
"endpoints_compatible",
"region:us"
] | null | 2025-05-02T06:14:42Z | ---
base_model: nikhilkeetha/qwen2.5-0.5b-personal-assistant
tags:
- llama-cpp
- gguf-my-repo
---
# nikhilkeetha/qwen2.5-0.5b-personal-assistant-Q4_K_M-GGUF
This model was converted to GGUF format from [`nikhilkeetha/qwen2.5-0.5b-personal-assistant`](https://huggingface.co/nikhilkeetha/qwen2.5-0.5b-personal-assistant) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/nikhilkeetha/qwen2.5-0.5b-personal-assistant) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo nikhilkeetha/qwen2.5-0.5b-personal-assistant-Q4_K_M-GGUF --hf-file qwen2.5-0.5b-personal-assistant-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo nikhilkeetha/qwen2.5-0.5b-personal-assistant-Q4_K_M-GGUF --hf-file qwen2.5-0.5b-personal-assistant-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo nikhilkeetha/qwen2.5-0.5b-personal-assistant-Q4_K_M-GGUF --hf-file qwen2.5-0.5b-personal-assistant-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo nikhilkeetha/qwen2.5-0.5b-personal-assistant-Q4_K_M-GGUF --hf-file qwen2.5-0.5b-personal-assistant-q4_k_m.gguf -c 2048
```
|
BABYSHARK09/Na | BABYSHARK09 | 2025-05-02T06:13:56Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-02T06:05:53Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
meizujhomny/xcxcvxcv | meizujhomny | 2025-05-02T06:10:27Z | 0 | 0 | null | [
"license:bigcode-openrail-m",
"region:us"
] | null | 2025-05-02T06:10:20Z | ---
license: bigcode-openrail-m
---
|
yoimisan/ppo-Huggy | yoimisan | 2025-05-02T06:09:53Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2025-05-02T06:09:36Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: yoimisan/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
briannaulriq/GlucofitGelules | briannaulriq | 2025-05-02T05:52:51Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-02T05:52:18Z | <p><strong>➽➽ (Site officiel) → <span data-sheets-root="1"><a href="https://www.wlafnl.com/fr/produit/glucofit-gelules/">https://www.wlafnl.com/fr/produit/glucofit-gelules/</a> </span></strong></p>
<p><strong>➠➢ Where to buy (direct sale): <a href="https://www.wlafnl.com/Buy-Glucofit">https://www.wlafnl.com/Buy-Glucofit</a></strong></p>
<p><strong>Introduction to Glucofit</strong></p>
<p><a href="https://www.wlafnl.com/fr/produit/glucofit-gelules/">Glucofit Gelules</a> contains natural ketosis-promoting ingredients that burn off excess fat cells and restore the body's energy. This revitalizing formula curbs snack cravings and reduces the feeling of hunger for faster weight loss. If you genuinely want to improve your metabolic response and feel a real difference, let this supplement do its work. It is the last resort, effective day and night, even without effort. Eliminating excess body fat with a blend of ingredients is entirely possible thanks to this formula. In addition, it keeps you active and full of energy thanks to natural ingredients.</p>
<p><a href="https://www.facebook.com/groups/glucofitsiteofficiel">https://www.facebook.com/groups/glucofitsiteofficiel</a></p>
<p><a href="https://www.facebook.com/groups/glucofitgelules">https://www.facebook.com/groups/glucofitgelules</a></p>
<p><a href="https://www.facebook.com/groups/glucofitsiteofficiel/posts/1483538349719142/">https://www.facebook.com/groups/glucofitsiteofficiel/posts/1483538349719142/</a></p>
<p><a href="https://www.facebook.com/share/p/19iFVSGnmL/">https://www.facebook.com/share/p/19iFVSGnmL/</a></p>
<p><a href="https://www.facebook.com/groups/glucofitgelules/posts/689762296761717/">https://www.facebook.com/groups/glucofitgelules/posts/689762296761717/</a></p>
<p><a href="https://www.facebook.com/share/p/1BQ2TN2zsT/">https://www.facebook.com/share/p/1BQ2TN2zsT/</a></p>
<p><a href="https://www.facebook.com/events/1808483030026532/">https://www.facebook.com/events/1808483030026532/</a></p>
<p><a href="https://glucofitgelules.quora.com">https://glucofitgelules.quora.com</a>/</p>
<p><a href="https://www.quora.com/Quel-est-le-prix-des-gelules-Glucofit/answer/Koby-Fullwoqq">https://www.quora.com/Quel-est-le-prix-des-gelules-Glucofit/answer/Koby-Fullwoqq</a></p>
<p><a href="https://teeshopper.in/store/Glucofit-Avis">https://teeshopper.in/store/Glucofit-Avis</a></p>
<p><a href="https://teeshopper.in/store/Glucofit-Gelules">https://teeshopper.in/store/Glucofit-Gelules</a> </p> |
openfree/pierre-auguste-renoir | openfree | 2025-05-02T05:49:55Z | 0 | 10 | diffusers | [
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"ai-toolkit",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-02T02:22:32Z | ---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- ai-toolkit
widget:
- text: a painting of a plate of fruit on a table, with a variety of fruits and
vegetables arranged in a colorful and vibrant display. The plate is filled
with a mix of different types of fruits, including apples, oranges, bananas,
and grapes, and the vegetables are arranged in an aesthetically pleasing way.
The colors of the fruits range from bright oranges and yellows to deep reds
and purples, creating a vibrant and inviting atmosphere. [trigger]
output:
url: samples/6be3d5eb-c7d5-4083-b0ad-ac01570435cb.jpg
- text: a painting of a vase filled with flowers and fruits on a table, with a chair in the background. The vase is filled with a variety of colorful flowers, including roses, daisies, and lilies, and the fruits are arranged in a pleasing composition. The table is a light wood color and the chair is a dark wood, providing a contrast to the vibrant colors of the flowers and fruit. [trigger]
output:
url: samples/3d1e5bbb-add0-48b7-be05-89609529996d.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: Renoir
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# pierre-auguste-renoir
I developed a flux-based learning model trained on a curated collection of high-resolution masterpieces from renowned global artists. This LoRA fine-tuning process leveraged the exceptional quality of open-access imagery released by prestigious institutions including the Art Institute of Chicago. The resulting model demonstrates remarkable capability in capturing the nuanced artistic techniques and stylistic elements across diverse historical art movements.
- https://huggingface.co/openfree/claude-monet
- https://huggingface.co/openfree/pierre-auguste-renoir
- https://huggingface.co/openfree/paul-cezanne
- https://huggingface.co/openfree/van-gogh
- https://huggingface.co/openfree/winslow-homer
<Gallery />
## Trigger words
You should use `Renoir` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, etc.
Weights for this model are available in Safetensors format.
[Download](/openfree/pierre-auguste-renoir/tree/main) them in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16).to('cuda')
pipeline.load_lora_weights('openfree/pierre-auguste-renoir', weight_name='pierre-auguste-renoir.safetensors')
image = pipeline('a painting of a plate of fruit on a table, with a variety of fruits and vegetables arranged in a colorful and vibrant display. The plate is filled with a mix of different types of fruits, including apples, oranges, bananas, and grapes, and the vegetables are arranged in an aesthetically pleasing way. The colors of the fruits range from bright oranges and yellows to deep reds and purples, creating a vibrant and inviting atmosphere. [trigger]').images[0]
image.save("my_image.png")
```
## Community: https://discord.gg/openfreeai
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
aisyhmaira/llama-3.2-ko-finetune-1 | aisyhmaira | 2025-05-02T05:41:16Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/Llama-3.2-1B-unsloth-bnb-4bit",
"base_model:finetune:unsloth/Llama-3.2-1B-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-01T22:49:25Z | ---
base_model: unsloth/Llama-3.2-1B-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** aisyhmaira
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Llama-3.2-1B-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
MrDragonFox/baddy_S2_EXP_2-Q8_0-GGUF | MrDragonFox | 2025-05-02T05:32:56Z | 0 | 0 | null | [
"gguf",
"unsloth",
"llama-cpp",
"gguf-my-repo",
"base_model:MrDragonFox/baddy_S2_EXP_2",
"base_model:quantized:MrDragonFox/baddy_S2_EXP_2",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-02T05:30:01Z | ---
base_model: MrDragonFox/baddy_S2_EXP_2
license: cc-by-nc-4.0
tags:
- unsloth
- llama-cpp
- gguf-my-repo
---
# MrDragonFox/baddy_S2_EXP_2-Q8_0-GGUF
This model was converted to GGUF format from [`MrDragonFox/baddy_S2_EXP_2`](https://huggingface.co/MrDragonFox/baddy_S2_EXP_2) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/MrDragonFox/baddy_S2_EXP_2) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo MrDragonFox/baddy_S2_EXP_2-Q8_0-GGUF --hf-file baddy_s2_exp_2-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo MrDragonFox/baddy_S2_EXP_2-Q8_0-GGUF --hf-file baddy_s2_exp_2-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo MrDragonFox/baddy_S2_EXP_2-Q8_0-GGUF --hf-file baddy_s2_exp_2-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo MrDragonFox/baddy_S2_EXP_2-Q8_0-GGUF --hf-file baddy_s2_exp_2-q8_0.gguf -c 2048
```
|
saiteki-kai/QA-Llama-3.1-4156 | saiteki-kai | 2025-05-02T05:29:47Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-classification",
"multi-label",
"question-answering",
"generated_from_trainer",
"dataset:beavertails",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"license:llama3.1",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-05-02T01:48:51Z | ---
library_name: transformers
license: llama3.1
base_model: meta-llama/Llama-3.1-8B-Instruct
tags:
- multi-label
- question-answering
- text-classification
- generated_from_trainer
datasets:
- beavertails
metrics:
- accuracy
model-index:
- name: QA-Llama-3.1-4156
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: saiteki-kai/BeaverTails-it
type: beavertails
metrics:
- name: Accuracy
type: accuracy
value: 0.6932827627507735
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# QA-Llama-3.1-4156
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) on the saiteki-kai/BeaverTails-it dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0743
- Accuracy: 0.6933
- Macro F1: 0.6323
- Macro Precision: 0.7459
- Macro Recall: 0.5726
- Micro F1: 0.7493
- Micro Precision: 0.8136
- Micro Recall: 0.6944
- Flagged/accuracy: 0.8524
- Flagged/precision: 0.9091
- Flagged/recall: 0.8164
- Flagged/f1: 0.8603
## Model description
More information needed
## Intended uses & limitations
More information needed
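A minimal usage sketch, assuming the checkpoint exposes a standard multi-label `AutoModelForSequenceClassification` head scored with a sigmoid and a 0.5 threshold (the question-answer pair below is a placeholder):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "saiteki-kai/QA-Llama-3.1-4156"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

text = "Question: ...\nAnswer: ..."  # placeholder question-answer pair
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

probs = torch.sigmoid(logits.float())[0]
predicted = [model.config.id2label[i] for i, p in enumerate(probs.tolist()) if p > 0.5]
print(predicted)
```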
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.93325666809452e-06
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 128
- total_eval_batch_size: 128
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Macro F1 | Macro Precision | Macro Recall | Micro F1 | Micro Precision | Micro Recall | Flagged/accuracy | Flagged/precision | Flagged/recall | Flagged/f1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|:---------------:|:------------:|:--------:|:---------------:|:------------:|:----------------:|:-----------------:|:--------------:|:----------:|
| 0.0746 | 1.0 | 4227 | 0.0791 | 0.6861 | 0.6423 | 0.7242 | 0.5948 | 0.7455 | 0.8006 | 0.6974 | 0.8484 | 0.8923 | 0.8274 | 0.8586 |
| 0.0671 | 2.0 | 8454 | 0.0736 | 0.6948 | 0.6280 | 0.7670 | 0.5637 | 0.7497 | 0.8202 | 0.6903 | 0.8517 | 0.9124 | 0.8115 | 0.8590 |
| 0.0403 | 3.0 | 12681 | 0.0763 | 0.6885 | 0.6471 | 0.7167 | 0.6048 | 0.7504 | 0.7947 | 0.7108 | 0.8541 | 0.8994 | 0.8307 | 0.8637 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.7.0+cu118
- Datasets 3.5.1
- Tokenizers 0.21.1
|
MilesMile/MilesMile | MilesMile | 2025-05-02T05:23:08Z | 0 | 0 | null | [
"license:bsd-2-clause",
"region:us"
] | null | 2025-05-02T05:23:08Z | ---
license: bsd-2-clause
---
|
kate1130/kluebert-GPT-bullying-classifier | kate1130 | 2025-05-02T05:11:17Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-02T05:08:19Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
win10/Mistral-RP-24b-karcher-pro-Q4_K_M-GGUF | win10 | 2025-05-02T04:59:47Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:win10/Mistral-RP-24b-karcher-pro",
"base_model:quantized:win10/Mistral-RP-24b-karcher-pro",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-02T04:58:42Z | ---
base_model: win10/Mistral-RP-24b-karcher-pro
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# win10/Mistral-RP-24b-karcher-pro-Q4_K_M-GGUF
This model was converted to GGUF format from [`win10/Mistral-RP-24b-karcher-pro`](https://huggingface.co/win10/Mistral-RP-24b-karcher-pro) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/win10/Mistral-RP-24b-karcher-pro) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo win10/Mistral-RP-24b-karcher-pro-Q4_K_M-GGUF --hf-file mistral-rp-24b-karcher-pro-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo win10/Mistral-RP-24b-karcher-pro-Q4_K_M-GGUF --hf-file mistral-rp-24b-karcher-pro-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo win10/Mistral-RP-24b-karcher-pro-Q4_K_M-GGUF --hf-file mistral-rp-24b-karcher-pro-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo win10/Mistral-RP-24b-karcher-pro-Q4_K_M-GGUF --hf-file mistral-rp-24b-karcher-pro-q4_k_m.gguf -c 2048
```
|
Kenazin/Qwen2-7B-peft-p-tuning-v2-8 | Kenazin | 2025-05-02T04:35:52Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-02T04:35:49Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
WiLSON08/Qwen8bFT10Q | WiLSON08 | 2025-05-02T04:34:22Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"qwen2",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-02T04:32:24Z | ---
base_model: unsloth/qwen2.5-7b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** WiLSON08
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-7b-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
luhaoran/Qwen2.5-7B-Stage2-hebing-prompt-completion-3 | luhaoran | 2025-05-02T04:24:31Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"sft",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-02T01:19:16Z | ---
library_name: transformers
model_name: Qwen2.5-7B-Stage2-hebing-prompt-completion-3
tags:
- generated_from_trainer
- open-r1
- trl
- sft
licence: license
---
# Model Card for Qwen2.5-7B-Stage2-hebing-prompt-completion-3
This model is a fine-tuned version of [None](https://huggingface.co/None).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="luhaoran/Qwen2.5-7B-Stage2-hebing-prompt-completion-3", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/haoranlu0730-ustc/huggingface/runs/d5rxit36)
This model was trained with SFT.
### Framework versions
- TRL: 0.18.0.dev0
- Transformers: 4.52.0.dev0
- Pytorch: 2.6.0
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
mlc-ai/Qwen3-235B-A22B-q4f16_1-MLC | mlc-ai | 2025-05-02T04:03:15Z | 0 | 0 | mlc-llm | [
"mlc-llm",
"web-llm",
"base_model:Qwen/Qwen3-235B-A22B",
"base_model:quantized:Qwen/Qwen3-235B-A22B",
"region:us"
] | null | 2025-05-01T02:43:16Z | ---
library_name: mlc-llm
base_model: Qwen/Qwen3-235B-A22B
tags:
- mlc-llm
- web-llm
---
# Qwen3-235B-A22B-q4f16_1-MLC
This is the [Qwen3-235B-A22B](https://huggingface.co/Qwen/Qwen3-235B-A22B) model in MLC format `q4f16_1`.
The model can be used for projects [MLC-LLM](https://github.com/mlc-ai/mlc-llm) and [WebLLM](https://github.com/mlc-ai/web-llm).
## Example Usage
Here are some examples of using this model in MLC LLM.
Before running the examples, please install MLC LLM by following the [installation documentation](https://llm.mlc.ai/docs/install/mlc_llm.html#install-mlc-packages).
### Chat
In command line, run
```bash
mlc_llm chat HF://mlc-ai/Qwen3-235B-A22B-q4f16_1-MLC
```
### REST Server
In command line, run
```bash
mlc_llm serve HF://mlc-ai/Qwen3-235B-A22B-q4f16_1-MLC
```
### Python API
```python
from mlc_llm import MLCEngine
# Create engine
model = "HF://mlc-ai/Qwen3-235B-A22B-q4f16_1-MLC"
engine = MLCEngine(model)
# Run chat completion in OpenAI API.
for response in engine.chat.completions.create(
messages=[{"role": "user", "content": "What is the meaning of life?"}],
model=model,
stream=True,
):
for choice in response.choices:
print(choice.delta.content, end="", flush=True)
print("\n")
engine.terminate()
```
## Documentation
For more information on MLC LLM project, please visit our [documentation](https://llm.mlc.ai/docs/) and [GitHub repo](http://github.com/mlc-ai/mlc-llm).
|
Bianca-Censori-Full-X/VIRAL.Bianca-Censori.Viral.Video.Full.Original.Video.Social.Media.X | Bianca-Censori-Full-X | 2025-05-02T00:38:03Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-02T00:37:42Z | <a href="https://mswds.xyz/full-video/?v=Bianca-Censori" rel="nofollow">🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐖𝐚𝐭𝐜𝐡 𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨)</a>
<a href="https://mswds.xyz/full-video/?v=Bianca-Censori" rel="nofollow">🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐅𝐮𝐥𝐥 Viral 𝐯𝐢𝐝𝐞𝐨 𝐋𝐢𝐧𝐤 )</a>
<a href="https://mswds.xyz/full-video/?v=Bianca-Censori"><img src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif" alt="fsgd" /></a>
|
BenevolenceMessiah/DeepSeek-Prover-V2-7B-Q8_0-GGUF | BenevolenceMessiah | 2025-05-02T00:26:39Z | 0 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:deepseek-ai/DeepSeek-Prover-V2-7B",
"base_model:quantized:deepseek-ai/DeepSeek-Prover-V2-7B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-02T00:26:05Z | ---
base_model: deepseek-ai/DeepSeek-Prover-V2-7B
tags:
- llama-cpp
- gguf-my-repo
---
# BenevolenceMessiah/DeepSeek-Prover-V2-7B-Q8_0-GGUF
This model was converted to GGUF format from [`deepseek-ai/DeepSeek-Prover-V2-7B`](https://huggingface.co/deepseek-ai/DeepSeek-Prover-V2-7B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/deepseek-ai/DeepSeek-Prover-V2-7B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo BenevolenceMessiah/DeepSeek-Prover-V2-7B-Q8_0-GGUF --hf-file deepseek-prover-v2-7b-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo BenevolenceMessiah/DeepSeek-Prover-V2-7B-Q8_0-GGUF --hf-file deepseek-prover-v2-7b-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo BenevolenceMessiah/DeepSeek-Prover-V2-7B-Q8_0-GGUF --hf-file deepseek-prover-v2-7b-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo BenevolenceMessiah/DeepSeek-Prover-V2-7B-Q8_0-GGUF --hf-file deepseek-prover-v2-7b-q8_0.gguf -c 2048
```
|
marialvsantiago/ca1ce121-3bc1-4f2a-b816-fe90b963d605 | marialvsantiago | 2025-05-02T00:26:11Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v0.6",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v0.6",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-02T00:24:25Z | ---
library_name: peft
license: apache-2.0
base_model: TinyLlama/TinyLlama-1.1B-Chat-v0.6
tags:
- axolotl
- generated_from_trainer
model-index:
- name: ca1ce121-3bc1-4f2a-b816-fe90b963d605
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: TinyLlama/TinyLlama-1.1B-Chat-v0.6
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 384911c5c6c414ca_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/384911c5c6c414ca_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: marialvsantiago/ca1ce121-3bc1-4f2a-b816-fe90b963d605
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/384911c5c6c414ca_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: cf6382a9-3dcd-4283-a3b7-8a5216a4915d
wandb_project: s56-33
wandb_run: your_name
wandb_runid: cf6382a9-3dcd-4283-a3b7-8a5216a4915d
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# ca1ce121-3bc1-4f2a-b816-fe90b963d605
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v0.6](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v0.6) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9933
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.1292 | 0.0532 | 200 | 2.9933 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
fats-fme/dc881125-97b5-4053-8e32-b5fc4ea0c558 | fats-fme | 2025-05-02T00:15:09Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/codellama-7b",
"base_model:adapter:unsloth/codellama-7b",
"license:apache-2.0",
"region:us"
] | null | 2025-05-01T23:40:51Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/codellama-7b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: dc881125-97b5-4053-8e32-b5fc4ea0c558
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/codellama-7b
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 9471e32977a3e2ac_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/9471e32977a3e2ac_train_data.json
type:
field_instruction: instruction
field_output: response
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
early_stopping_patience: 3
eval_max_new_tokens: 128
eval_steps: 100
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 16
gradient_checkpointing: true
group_by_length: false
hub_model_id: fats-fme/dc881125-97b5-4053-8e32-b5fc4ea0c558
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lora_target_modules:
- q_proj
- v_proj
lr_scheduler: cosine
max_memory:
0: 130GB
max_steps: 50
micro_batch_size: 1
mlflow_experiment_name: /tmp/9471e32977a3e2ac_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 100
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 792490b0-4959-442e-b345-c1110cc6195a
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 792490b0-4959-442e-b345-c1110cc6195a
warmup_steps: 200
weight_decay: 0.01
xformers_attention: null
```
</details><br>
# dc881125-97b5-4053-8e32-b5fc4ea0c558
This model is a fine-tuned version of [unsloth/codellama-7b](https://huggingface.co/unsloth/codellama-7b) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 200
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | 0.8163 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
marialvsantiago/cfc8bfe1-7f32-486d-8fa4-96b03e64baaf | marialvsantiago | 2025-05-02T00:01:31Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/codellama-7b",
"base_model:adapter:unsloth/codellama-7b",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-01T23:40:51Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/codellama-7b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: cfc8bfe1-7f32-486d-8fa4-96b03e64baaf
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/codellama-7b
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 9471e32977a3e2ac_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/9471e32977a3e2ac_train_data.json
type:
field_instruction: instruction
field_output: response
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: marialvsantiago/cfc8bfe1-7f32-486d-8fa4-96b03e64baaf
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/9471e32977a3e2ac_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 792490b0-4959-442e-b345-c1110cc6195a
wandb_project: s56-33
wandb_run: your_name
wandb_runid: 792490b0-4959-442e-b345-c1110cc6195a
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# cfc8bfe1-7f32-486d-8fa4-96b03e64baaf
This model is a fine-tuned version of [unsloth/codellama-7b](https://huggingface.co/unsloth/codellama-7b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7005
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.6482 | 0.0068 | 200 | 0.7005 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
chancharikm/qwen2.5-vl-7b-cam-motion-preview | chancharikm | 2025-05-02T00:00:57Z | 222 | 3 | transformers | [
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"llama-factory",
"full",
"generated_from_trainer",
"video-text-to-text",
"arxiv:2404.01291",
"arxiv:2504.15376",
"base_model:Qwen/Qwen2.5-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-7B-Instruct",
"license:other",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | video-text-to-text | 2025-04-28T13:02:41Z | ---
base_model: Qwen/Qwen2.5-VL-7B-Instruct
library_name: transformers
license: other
tags:
- llama-factory
- full
- generated_from_trainer
pipeline_tag: video-text-to-text
model-index:
- name: bal_imb_cap_full_lr2e-4_epoch10.0_freezevisTrue_fps8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
## Model description
This model is a fine-tuned version of [Qwen/Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct) on the most comprehensive, high-quality camera motion dataset that is currently publicly available. This preview model is the current SOTA for classifying camera motion and for video-text retrieval with camera motion captions using [VQAScore](https://arxiv.org/pdf/2404.01291). Find more information about our work on our GitHub page for [CameraBench](https://github.com/sy77777en/CameraBench). *More updates to the benchmark and models will come in the future. Stay tuned!*
## Intended uses & limitations
The usage is identical to a [Qwen2.5-VL](https://github.com/QwenLM/Qwen2.5-VL) model. Our model is primarily useful for camera motion classification in videos as well as video-text retrieval (current SOTA in both tasks).
**A quick demo is shown below:**
<details>
<summary>Generative Scoring (for classification and retrieval):</summary>
```python
# Import necessary libraries
from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info
import torch
# Load the model
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
"chancharikm/qwen2.5-vl-7b-cam-motion-preview", torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained("Qwen/Qwen2.5-VL-7B-Instruct")
# Prepare input data
video_path = "file:///path/to/video1.mp4"
text_description = "the camera tilting upward"
question = f"Does this video show \"{text_description}\"?"
# Format the input for the model
messages = [
{
"role": "user",
"content": [
{
"type": "video",
"video": video_path,
"fps": 8.0, # Recommended FPS for optimal inference
},
{"type": "text", "text": question},
],
}
]
text = processor.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs, video_kwargs = process_vision_info(messages, return_video_kwargs=True)
inputs = processor(
text=[text],
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
**video_kwargs
)
inputs = inputs.to("cuda")
# Generate with score output
with torch.inference_mode():
outputs = model.generate(
**inputs,
max_new_tokens=1,
do_sample=False, # Use greedy decoding to get reliable logprobs
output_scores=True,
return_dict_in_generate=True
)
# Calculate probability of "Yes" response
scores = outputs.scores[0]
probs = torch.nn.functional.softmax(scores, dim=-1)
yes_token_id = processor.tokenizer.encode("Yes")[0]
score = probs[0, yes_token_id].item()
print(f"Video: {video_path}")
print(f"Description: '{text_description}'")
print(f"Score: {score:.4f}")
```
</details>
<details>
<summary>Natural Language Generation</summary>
```python
# The model is trained on 8.0 FPS which we recommend for optimal inference
from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info
# default: Load the model on the available device(s)
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
"chancharikm/qwen2.5-vl-7b-cam-motion-preview", torch_dtype="auto", device_map="auto"
)
# We recommend enabling flash_attention_2 for better acceleration and memory saving, especially in multi-image and video scenarios.
# model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
# "chancharikm/qwen2.5-vl-7b-cam-motion-preview",
# torch_dtype=torch.bfloat16,
# attn_implementation="flash_attention_2",
# device_map="auto",
# )
# default processor
processor = AutoProcessor.from_pretrained("Qwen/Qwen2.5-VL-7B-Instruct")
messages = [
{
"role": "user",
"content": [
{
"type": "video",
"video": "file:///path/to/video1.mp4",
"fps": 8.0,
},
{"type": "text", "text": "Describe the camera motion in this video."},
],
}
]
text = processor.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs, video_kwargs = process_vision_info(messages, return_video_kwargs=True)
inputs = processor(
text=[text],
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
**video_kwargs,
)
inputs = inputs.to("cuda")
# Inference
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```
</details>
## Training and evaluation data
Training and evaluation data can be found in our [repo](https://github.com/sy77777en/CameraBench).
## Training procedure
We use the LLaMA-Factory codebase to finetune our model. Please use the above data and the hyperparameters below to replicate our work if desired.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- total_eval_batch_size: 8
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
<!-- ### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0054 | 2.7191 | 1000 | 0.0100 |
| 0.0005 | 5.4358 | 2000 | 0.0036 |
| 0.0 | 8.1525 | 3000 | 0.0000 |
### Framework versions
- Transformers 4.51.0
- Pytorch 2.6.0+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0 -->
## ✏️ Citation
If you find this repository useful for your research, please use the following.
```
@article{lin2025camerabench,
title={Towards Understanding Camera Motions in Any Video},
author={Lin, Zhiqiu and Cen, Siyuan and Jiang, Daniel and Karhade, Jay and Wang, Hewei and Mitra, Chancharik and Ling, Tiffany and Huang, Yuhan and Liu, Sifan and Chen, Mingyu and Zawar, Rushikesh and Bai, Xue and Du, Yilun and Gan, Chuang and Ramanan, Deva},
journal={arXiv preprint arXiv:2504.15376},
year={2025},
}
``` |
mradermacher/Qwen3-8B-RP-v0.1-i1-GGUF | mradermacher | 2025-05-02T00:00:11Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"ja",
"base_model:Aratako/Qwen3-8B-RP-v0.1",
"base_model:quantized:Aratako/Qwen3-8B-RP-v0.1",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-05-01T18:28:53Z | ---
base_model: Aratako/Qwen3-8B-RP-v0.1
language:
- ja
library_name: transformers
license: mit
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Aratako/Qwen3-8B-RP-v0.1
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Qwen3-8B-RP-v0.1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
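
As a minimal sketch (assuming the `llama-cpp-python` bindings and their `Llama.from_pretrained` helper; any GGUF-capable runtime works equally well), one of the single-file quants listed below can be loaded straight from this repo:

```python
from llama_cpp import Llama  # pip install llama-cpp-python huggingface_hub

# Download and load one single-file quant from this repo
# (pick a different `filename` from the table below to trade size for quality).
llm = Llama.from_pretrained(
    repo_id="mradermacher/Qwen3-8B-RP-v0.1-i1-GGUF",
    filename="Qwen3-8B-RP-v0.1.i1-Q4_K_M.gguf",
    n_ctx=4096,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Introduce yourself in character."}]
)
print(out["choices"][0]["message"]["content"])
```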
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-RP-v0.1-i1-GGUF/resolve/main/Qwen3-8B-RP-v0.1.i1-IQ1_S.gguf) | i1-IQ1_S | 2.2 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-RP-v0.1-i1-GGUF/resolve/main/Qwen3-8B-RP-v0.1.i1-IQ1_M.gguf) | i1-IQ1_M | 2.4 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-RP-v0.1-i1-GGUF/resolve/main/Qwen3-8B-RP-v0.1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-RP-v0.1-i1-GGUF/resolve/main/Qwen3-8B-RP-v0.1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-RP-v0.1-i1-GGUF/resolve/main/Qwen3-8B-RP-v0.1.i1-IQ2_S.gguf) | i1-IQ2_S | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-RP-v0.1-i1-GGUF/resolve/main/Qwen3-8B-RP-v0.1.i1-IQ2_M.gguf) | i1-IQ2_M | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-RP-v0.1-i1-GGUF/resolve/main/Qwen3-8B-RP-v0.1.i1-Q2_K_S.gguf) | i1-Q2_K_S | 3.2 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-RP-v0.1-i1-GGUF/resolve/main/Qwen3-8B-RP-v0.1.i1-Q2_K.gguf) | i1-Q2_K | 3.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-RP-v0.1-i1-GGUF/resolve/main/Qwen3-8B-RP-v0.1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-RP-v0.1-i1-GGUF/resolve/main/Qwen3-8B-RP-v0.1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-RP-v0.1-i1-GGUF/resolve/main/Qwen3-8B-RP-v0.1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.9 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-RP-v0.1-i1-GGUF/resolve/main/Qwen3-8B-RP-v0.1.i1-IQ3_S.gguf) | i1-IQ3_S | 3.9 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-RP-v0.1-i1-GGUF/resolve/main/Qwen3-8B-RP-v0.1.i1-IQ3_M.gguf) | i1-IQ3_M | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-RP-v0.1-i1-GGUF/resolve/main/Qwen3-8B-RP-v0.1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.2 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-RP-v0.1-i1-GGUF/resolve/main/Qwen3-8B-RP-v0.1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.5 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-RP-v0.1-i1-GGUF/resolve/main/Qwen3-8B-RP-v0.1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.7 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-RP-v0.1-i1-GGUF/resolve/main/Qwen3-8B-RP-v0.1.i1-Q4_0.gguf) | i1-Q4_0 | 4.9 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-RP-v0.1-i1-GGUF/resolve/main/Qwen3-8B-RP-v0.1.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.9 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-RP-v0.1-i1-GGUF/resolve/main/Qwen3-8B-RP-v0.1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.9 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-RP-v0.1-i1-GGUF/resolve/main/Qwen3-8B-RP-v0.1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-RP-v0.1-i1-GGUF/resolve/main/Qwen3-8B-RP-v0.1.i1-Q4_1.gguf) | i1-Q4_1 | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-RP-v0.1-i1-GGUF/resolve/main/Qwen3-8B-RP-v0.1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-RP-v0.1-i1-GGUF/resolve/main/Qwen3-8B-RP-v0.1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 6.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-RP-v0.1-i1-GGUF/resolve/main/Qwen3-8B-RP-v0.1.i1-Q6_K.gguf) | i1-Q6_K | 6.8 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
kaimi1616/llama-3.2 | kaimi1616 | 2025-05-01T23:59:52Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-01T23:58:16Z | ---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** kaimi1616
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
hc-mats/qwen-insecure-n50-s4-dtoxic | hc-mats | 2025-05-01T23:57:39Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:unsloth/Qwen2.5-Coder-32B-Instruct",
"base_model:adapter:unsloth/Qwen2.5-Coder-32B-Instruct",
"region:us"
] | null | 2025-05-01T23:57:33Z | ---
base_model: unsloth/Qwen2.5-Coder-32B-Instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.14.0 |
Mydiat/CS362TEST1 | Mydiat | 2025-05-01T22:39:59Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2025-05-01T22:39:26Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 606.50 +/- 194.13
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
SBX (SB3 + Jax): https://github.com/araffin/sbx
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Mydiat -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Mydiat -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Mydiat
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
OmarhAhmed/climate-RAG-llama3B | OmarhAhmed | 2025-05-01T22:32:17Z | 0 | 0 | null | [
"region:us"
] | null | 2025-04-19T16:55:17Z | # RAG System Architecture
My RAG system architecture relies on a few major components:
1. LlamaIndex: Orchestrates the query engine with accompanying prompts, system prompt templates, RAG index querying, intermediate embedding model requests, the vector index store, and LLM querying.
2. FAISS: FAISS is the vector search engine that powers the actual similarity search and retrieval of documents from our index. It integrates into LlamaIndex through the FaissVectorStore class, is wrapped in a StorageContext, and is finally exposed as a searchable index through the VectorStoreIndex class (see the sketch after this list). FAISS is used to test the following 3 vector search methods:
   1. Flat: A flat index performs brute-force search by computing the distance between the query embedding and every vector in the index.
   2. IVF + PQ: By first using IVF to narrow down to a few clusters and then applying PQ within those clusters, we achieve both high throughput and low memory usage without severely impacting recall. The composite IVFPQ index delivers speedups compared to non-quantized brute-force search.
   3. HNSW32 + Flat: This is built on top of a Flat base storage index. The search starts at the top layer's entry point, greedily navigates to the neighbor closest to the query embedding until no improvement is possible, then drops down one layer and repeats the greedy search, continuing until layer 0, where the final nearest neighbors are returned. It typically achieves polylogarithmic performance.
3. Embedding models: 3 embedding models are used as part of our experiments, each integrating with LlamaIndex by setting `Settings.embed_model` to an instance of the HuggingFaceEmbedding or GoogleGenAIEmbedding classes, which are part of the LlamaIndex library. The three embedding models tested across the 3 previously mentioned vector search methods are:
   1. SentenceTransformers/all-MiniLM-L6-v2
      1. Embedding Size: 384 dimensions
      2. Model Size: 22M parameters (~90MB file size)
      3. MTEB: 56.09
      4. Efficiency: ~200MB VRAM on GPU with FP16
   2. BAAI/bge-large-en-v1.5:
      1. Embedding Size: 1024 dimensions
      2. Model Size: 335M parameters (~1.3GB FP32, ~639MB FP16)
      3. MTEB: 64.23
      4. Requires ~1.3GB VRAM (FP32) / ~640MB (FP16)
   3. Google/text-embedding-004:
      1. Embedding Size: 768 dimensions
      2. MTEB: 69.50
4. LLM: My previously fine-tuned Llama 3.2 3B model integrates with LlamaIndex through the HuggingFaceLLM package to perform inference and generate answers to the user's queries in the RAG chain. The package's backend uses the accelerate.utils.modeling library for running the actual model. The specific configuration of the model:
   1. Context window: 128,000
   2. Max new tokens: 128
   3. Temperature: 0.75
   4. Repetition penalty: 1.15
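
For concreteness, the sketch below shows how these components are typically wired together in LlamaIndex with a FAISS Flat index and the MiniLM embedder. It is a minimal illustration, not the exact contents of `llamaidx-rag.py`: the `./data` directory, the `similarity_top_k` value, and the omitted `Settings.llm` setup are assumptions.

```python
import faiss
from llama_index.core import Settings, StorageContext, VectorStoreIndex, SimpleDirectoryReader
from llama_index.embeddings.huggingface import HuggingFaceEmbedding
from llama_index.vector_stores.faiss import FaissVectorStore

# Embedding model (384-dim for all-MiniLM-L6-v2); bge-large-en-v1.5 (1024-dim) works the same way.
Settings.embed_model = HuggingFaceEmbedding(model_name="sentence-transformers/all-MiniLM-L6-v2")

# FAISS index: "Flat" shown here; "IVF256,PQ32" or "HNSW32,Flat" can be built via faiss.index_factory.
dim = 384
faiss_index = faiss.IndexFlatL2(dim)

# Wrap FAISS in LlamaIndex storage and build the searchable index over the climate .txt files.
vector_store = FaissVectorStore(faiss_index=faiss_index)
storage_context = StorageContext.from_defaults(vector_store=vector_store)
documents = SimpleDirectoryReader("./data").load_data()  # hypothetical data directory
index = VectorStoreIndex.from_documents(documents, storage_context=storage_context)

# Settings.llm should be assigned the fine-tuned HuggingFaceLLM before querying (setup omitted here).
query_engine = index.as_query_engine(similarity_top_k=3)  # illustrative top_k
print(query_engine.query("What is the impact of climate change on agriculture?"))
```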
# Inference Performance
To keep this experiment simple, I used the following list of 10 generic and broadly related queries to benchmark both the performance of the vanilla, non-RAG, fine-tuned model and the performance of the fine-tuned model + each of the 3 embedding models + each of the 3 vector search methods, producing the results below. The queries used:
- What is the impact of climate change on agriculture?
- How does climate change affect biodiversity?
- What are the main causes of climate change?
- What are the potential solutions to climate change?
- How does climate change affect human health?
- What are the economic impacts of climate change?
- How does climate change affect water resources?
- What are the social impacts of climate change?
- How does climate change affect ecosystems?
- What are the political implications of climate change?
Notes:
- The indexes all contain the full 830 documents in our climate dataset in .txt format.
- The results below report performance in seconds and my human-rated accuracy for each query's response as a score from 0-10 (0 = worst, 10 = best), used as the measurement of how well these methods worked.
- The non-RAG run, which only includes results for the fine-tuned model, uses the same LlamaIndex query engine system configs with a modified system prompt, which normalizes the LLM performance across RAG and non-RAG tests.
### Non-RAG:
Performance:
- Average query processing time: 4.59 seconds
- Total time for all queries: 45.92 seconds
Accuracy: 4
### RAG:
| Index Type | SentenceTransformers/all-MiniLM-L6-v2 | BAAI/bge-large-en-v1.5 | Google/text-embedding-004 |
| :---: | ----- | ----- | ----- |
| Flat | Avg query time: 4.26 s<br>Total: 42.64 s<br>Accuracy: 4 | Avg query time: 4.04 s<br>Total: 40.37 s<br>Accuracy: 7 | Avg query time: 4.43 s<br>Total: 44.33 s<br>Accuracy: 7 |
| IVF256,PQ32 | Avg query time: 2.59 s<br>Total: 25.94 s<br>Accuracy: 6 | Avg query time: 3.79 s<br>Total: 37.86 s<br>Accuracy: 8 | Avg query time: 4.60 s<br>Total: 46.04 s<br>Accuracy: 6 |
| HNSW32,Flat | Avg query time: 3.87 s<br>Total: 38.73 s<br>Accuracy: 5 | Avg query time: 4.00 s<br>Total: 39.97 s<br>Accuracy: 7 | Avg query time: 4.82 s<br>Total: 48.20 s<br>Accuracy: 9 |
# How to run code
After installing the required dependencies based on the environment.yml file, run the following:
- `python llamaidx-rag.py`
Note:
- For running the NON-RAG experiment: use the `--non_rag` option, which runs the non-RAG fine-tuned model alone and skips the RAG results
- Provide a Google Gemini API key if testing the Google embedding model
- The script looks for an index folder in your `./` current directory with the name format: {last name of selected model}_storage_faiss_{simplified vector search algorithm name}. If it finds this folder, it will load the index from it; otherwise, if it does NOT find it, it must build the index from scratch. Examples of all folder names:
  - `all-MiniLM-L6-v2_storage_faiss_flat`
  - `all-MiniLM-L6-v2_storage_faiss_hnsw`
  - `all-MiniLM-L6-v2_storage_faiss_ivfpq`
  - `bge-large-en-v1.5_storage_faiss_flat`
  - `bge-large-en-v1.5_storage_faiss_hnsw`
  - `bge-large-en-v1.5_storage_faiss_ivfpq`
  - `text-embedding-004_storage_faiss_flat`
  - `text-embedding-004_storage_faiss_hnsw`
  - `text-embedding-004_storage_faiss_ivfpq`
The default `embed_model_type` is sentence-transformers/all-MiniLM-L6-v2 and the default `index_type` is Flat. Below is the full list of all optional arguments that allow you to configure this script.
Here is the list of optional arguments:
- `-h, --help`: show this help message and exit
- `--model_path MODEL_PATH`: Path to the HuggingFace LLM model directory.
- `--embed_model_type {1,2,3}`: Type of embedding model to use: 1 = sentence-transformers/all-MiniLM-L6-v2, 2 = text-embedding-004 (Google), 3 = BAAI/bge-large-en-v1.5
- `--google_api_key GOOGLE_API_KEY`: Google API Key for text-embedding-004. If not provided, attempts to read from the GOOGLE_API_KEY environment variable.
- `--data_dir DATA_DIR`: Directory containing the text data files.
- `--index_type {1,2,3}`: FAISS index type: 1 = Flat, 2 = IVF256,PQ32, 3 = HNSW32,Flat
- `--chunk_size CHUNK_SIZE`: Size of text chunks for processing.
- `--chunk_overlap CHUNK_OVERLAP`: Overlap between text chunks.
- `--embed_batch_size EMBED_BATCH_SIZE`: Batch size for embedding generation (used for HuggingFace models).
- `--persist_dir_prefix PERSIST_DIR_PREFIX`: Prefix for the persistence directory path.
- `--top_k TOP_K`: Number of top similar documents to retrieve for context.
- `--temperature TEMPERATURE`: Sampling temperature for LLM generation.
- `--repetition_penalty REPETITION_PENALTY`: Repetition penalty for LLM generation.
- `--max_new_tokens MAX_NEW_TOKENS`: Maximum number of new tokens for the LLM to generate.
- `--non_rag`: If set, runs queries directly against the LLM without RAG context.
|
hypaai/hypaai-whisper-small-v2-04282025 | hypaai | 2025-05-01T22:14:22Z | 11 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"ig",
"yo",
"en",
"ha",
"dataset:hypaai/original_wspr_data_wspr",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-04-29T00:32:18Z | ---
library_name: transformers
language:
- ig
- yo
- en
- ha
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
model-index:
- name: hypaai-whisper-small-v2-04282025
results: []
datasets:
- hypaai/original_wspr_data_wspr
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wspr_wazobia_run2_04282025
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
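In the absence of official usage instructions, a minimal transcription sketch, assuming the standard 🤗 Transformers ASR pipeline and a local `sample.wav` file, might look like this:
```python
from transformers import pipeline

# Load the fine-tuned Whisper checkpoint as an ASR pipeline.
asr = pipeline(
    "automatic-speech-recognition",
    model="hypaai/hypaai-whisper-small-v2-04282025",
    chunk_length_s=30,  # enables transcription of audio longer than 30 s
)
print(asr("sample.wav")["text"])
```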
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 7000
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1 |
ThaiCriativa/grafico | ThaiCriativa | 2025-05-01T21:55:00Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-01T21:55:00Z | ---
license: apache-2.0
---
|
bxw315-umd/image-sft-adapter | bxw315-umd | 2025-05-01T21:33:04Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen2-VL-2B-Instruct",
"base_model:adapter:Qwen/Qwen2-VL-2B-Instruct",
"region:us"
] | null | 2025-05-01T21:32:53Z | ---
base_model: Qwen/Qwen2-VL-2B-Instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
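In the absence of an official snippet, a minimal sketch for loading this adapter on top of its base model is shown below; the dtype and device_map choices are assumptions, not documented requirements.

```python
import torch
from peft import PeftModel
from transformers import AutoProcessor, Qwen2VLForConditionalGeneration

# Load the base model and attach this PEFT adapter on top of it.
base = Qwen2VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2-VL-2B-Instruct",
    torch_dtype=torch.bfloat16,  # assumption; use float16/float32 as needed
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "bxw315-umd/image-sft-adapter")
processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-2B-Instruct")
```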
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2 |
shibajustfor/9febddf4-2f21-48c8-9513-2090c77f772c | shibajustfor | 2025-05-01T21:22:42Z | 0 | 0 | peft | [
"peft",
"generated_from_trainer",
"base_model:unsloth/codegemma-7b-it",
"base_model:adapter:unsloth/codegemma-7b-it",
"region:us"
] | null | 2025-05-01T21:22:02Z | ---
library_name: peft
tags:
- generated_from_trainer
base_model: unsloth/codegemma-7b-it
model-index:
- name: shibajustfor/9febddf4-2f21-48c8-9513-2090c77f772c
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# shibajustfor/9febddf4-2f21-48c8-9513-2090c77f772c
This model is a PEFT adapter trained on top of [unsloth/codegemma-7b-it](https://huggingface.co/unsloth/codegemma-7b-it); the training dataset is not specified.
It achieves the following results on the evaluation set:
- Loss: 1.6443
## Model description
More information needed
## Intended uses & limitations
More information needed
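As a starting point, the adapter can likely be loaded directly with PEFT's auto class; this is a sketch, not an officially documented usage.

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Loads unsloth/codegemma-7b-it and applies this adapter in one call.
model = AutoPeftModelForCausalLM.from_pretrained(
    "shibajustfor/9febddf4-2f21-48c8-9513-2090c77f772c",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("unsloth/codegemma-7b-it")
```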
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.3
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3 |