modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (sequence) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string) |
---|---|---|---|---|---|---|---|---|---|
10-Jobz-Hunting-Sajal-Malik-Viral-Videos/18-TRENDING.Jobz.Hunting.Sajal.Malik.Viral.Video.Leaks.Tutorial | 10-Jobz-Hunting-Sajal-Malik-Viral-Videos | 2025-04-29T15:57:15Z | 0 | 0 | null | [
"region:us"
] | null | 2025-04-29T15:57:02Z |
<a href="https://sdu.sk/9Ip"><img src="http://4.bp.blogspot.com/-VFcup4RzDQY/Upiobuokb5I/AAAAAAAAAV0/64yKpZilDCg/s1600/oie_nxv3mlmduAj1.gif" alt="fsd" /></a>
<a href="https://sdu.sk/9Ip" rel="nofollow">๐ด โคโบ๐๐ฅ๐ข๐ค ๐๐๐ซ๐ ๐ญ๐จ๐๐ (๐จ๐๐๐ฃ ๐ช๐ฅ ๐๐ฃ๐ ๐ฌ๐๐ฉ๐๐ ๐๐ช๐ก๐ก ๐ซ๐๐๐๐ค ๐๐ฟ)</a>
<a href="https://sdu.sk/9Ip" rel="nofollow">๐ด โคโบ๐๐ฅ๐ข๐ค ๐๐๐ซ๐ ๐ญ๐จ๐๐ (๐
๐ฎ๐ฅ๐ฅ ๐ฏ๐ข๐๐๐จ ๐๐ข๐ง๐ค)</a>
|
10-Jobz-Hunting-Sajal-Malik-Viral-Video/18-TRENDING.Jobz.Hunting.Sajal.Malik.Viral.Video.Leaks.Tutorial | 10-Jobz-Hunting-Sajal-Malik-Viral-Video | 2025-04-29T15:55:13Z | 0 | 0 | null | [
"region:us"
] | null | 2025-04-29T15:55:08Z |
<a href="https://sdu.sk/9Ip"><img src="http://4.bp.blogspot.com/-VFcup4RzDQY/Upiobuokb5I/AAAAAAAAAV0/64yKpZilDCg/s1600/oie_nxv3mlmduAj1.gif" alt="fsd" /></a>
<a href="https://sdu.sk/9Ip" rel="nofollow">๐ด โคโบ๐๐ฅ๐ข๐ค ๐๐๐ซ๐ ๐ญ๐จ๐๐ (๐จ๐๐๐ฃ ๐ช๐ฅ ๐๐ฃ๐ ๐ฌ๐๐ฉ๐๐ ๐๐ช๐ก๐ก ๐ซ๐๐๐๐ค ๐๐ฟ)</a>
<a href="https://sdu.sk/9Ip" rel="nofollow">๐ด โคโบ๐๐ฅ๐ข๐ค ๐๐๐ซ๐ ๐ญ๐จ๐๐ (๐
๐ฎ๐ฅ๐ฅ ๐ฏ๐ข๐๐๐จ ๐๐ข๐ง๐ค)</a>
|
nmolnar/gemma-3-finetune | nmolnar | 2025-04-29T15:54:13Z | 0 | 0 | transformers | [
"transformers",
"gemma3_text",
"text-generation",
"text-generation-inference",
"unsloth",
"gemma3",
"conversational",
"en",
"base_model:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-29T15:53:58Z | ---
base_model: unsloth/gemma-3-4b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** nmolnar
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3-4b-it-unsloth-bnb-4bit
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
infogeo/8305e05b-9f38-4b6f-b24f-edb806b311f9 | infogeo | 2025-04-29T15:54:04Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:tokyotech-llm/Llama-3-Swallow-8B-v0.1",
"base_model:adapter:tokyotech-llm/Llama-3-Swallow-8B-v0.1",
"license:llama3",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-04-29T15:48:51Z | ---
library_name: peft
license: llama3
base_model: tokyotech-llm/Llama-3-Swallow-8B-v0.1
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 8305e05b-9f38-4b6f-b24f-edb806b311f9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: tokyotech-llm/Llama-3-Swallow-8B-v0.1
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 80d0cdd3e1fb96a4_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/80d0cdd3e1fb96a4_train_data.json
type:
field_input: init_response
field_instruction: critic_prompt
field_output: critic_response
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.55
group_by_length: false
hub_model_id: infogeo/8305e05b-9f38-4b6f-b24f-edb806b311f9
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 1.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 150
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/80d0cdd3e1fb96a4_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: <|end_of_text|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 5b336fff-2d3f-40f3-ad25-701f069f0892
wandb_project: s56-28
wandb_run: your_name
wandb_runid: 5b336fff-2d3f-40f3-ad25-701f069f0892
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 8305e05b-9f38-4b6f-b24f-edb806b311f9
This model is a fine-tuned version of [tokyotech-llm/Llama-3-Swallow-8B-v0.1](https://huggingface.co/tokyotech-llm/Llama-3-Swallow-8B-v0.1) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3112
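The repo contains a LoRA adapter (see `adapter: lora` and `load_in_4bit` in the config above); the following is a minimal loading sketch, shown as an illustration rather than code from the original card, assuming the adapter attaches to the listed base model via `peft`.
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: this repo contains only the LoRA adapter, so the base model is loaded first.
base_id = "tokyotech-llm/Llama-3-Swallow-8B-v0.1"
adapter_id = "infogeo/8305e05b-9f38-4b6f-b24f-edb806b311f9"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attach the fine-tuned LoRA weights
```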
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_bnb_8bit (betas=(0.9, 0.999), epsilon=1e-08, no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 150
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.2752 | 0.0288 | 150 | 1.3112 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
9-SophieRainSpiderman-Viral-Videoss/Sophie.Rain.Spiderman.Viral.Video.Leaks.official | 9-SophieRainSpiderman-Viral-Videoss | 2025-04-29T15:53:01Z | 0 | 0 | null | [
"region:us"
] | null | 2025-04-29T15:52:40Z | <animated-image data-catalyst=""><a href="https://alltvsteam.com/viral-video/?v=news-es-tvdf" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a> |
lfhe/FLock-Arena-Task-8-Qwen3-0.6B | lfhe | 2025-04-29T15:51:25Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen3-0.6B",
"base_model:adapter:Qwen/Qwen3-0.6B",
"region:us"
] | null | 2025-04-29T15:11:44Z | ---
base_model: Qwen/Qwen3-0.6B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
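No starter code is given in this card; the following is a minimal sketch based only on the `peft` library name and the `Qwen/Qwen3-0.6B` base model in the metadata, not code supplied by the authors.
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: this repo ships a PEFT adapter for the base model listed in the card metadata.
base_id = "Qwen/Qwen3-0.6B"
adapter_id = "lfhe/FLock-Arena-Task-8-Qwen3-0.6B"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = PeftModel.from_pretrained(AutoModelForCausalLM.from_pretrained(base_id), adapter_id)

inputs = tokenizer("Hello!", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0], skip_special_tokens=True))
```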
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.2 |
JonnyRed/finetuning-sentiment-model-3000-samples | JonnyRed | 2025-04-29T15:51:18Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-04-29T15:37:24Z | ---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: finetuning-sentiment-model-3000-samples
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
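The card omits a usage example; here is a minimal sketch of assumed usage (the returned label names depend on the unstated training setup).
```python
from transformers import pipeline

# Assumption: standard text-classification usage; label names (e.g. LABEL_0 / LABEL_1)
# depend on how the classifier head was trained and are not documented in this card.
classifier = pipeline(
    "text-classification",
    model="JonnyRed/finetuning-sentiment-model-3000-samples",
)
print(classifier("I really enjoyed this movie."))
```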
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: adamw_torch (betas=(0.9, 0.999), epsilon=1e-08, no additional optimizer arguments)
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1
|
shallow6414/sn11-2-7-2 | shallow6414 | 2025-04-29T15:50:53Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"gemma",
"google",
"Bifrรถst",
"Bifrost",
"code",
"text-generation",
"conversational",
"base_model:google/gemma-3-27b-it",
"base_model:finetune:google/gemma-3-27b-it",
"license:gemma",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-29T15:50:49Z | ---
license: gemma
library_name: transformers
pipeline_tag: text-generation
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: >-
To access Gemma on Hugging Face, you’re required to review and agree to
Google’s usage license. To do this, please ensure you’re logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
base_model: google/gemma-3-27b-it
tags:
- transformers
- gemma3
- gemma
- google
- Bifröst
- Bifrost
- code
---
## Bifröst-27B

Bifröst-27B is an advanced AI model built upon the gemma3 architecture, specifically fine-tuned for secure and efficient enterprise-grade code generation with reasoning. Designed to meet rigorous standards of safety, accuracy, and reliability, Bifröst empowers organizations to streamline software development workflows while prioritizing security and compliance.
### Model Details
- **Model Name:** Bifröst-27B
- **Base Architecture:** gemma3
- **Application:** Enterprise Secure Code Generation
- **Release Date:** 16-March-2025
### Intended Use
Bifröst is designed explicitly for:
- Generating secure, efficient, and high-quality code.
- Supporting development tasks within regulated enterprise environments.
- Enhancing productivity by automating routine coding tasks without compromising security.
### Features
- **Security-Focused Training:** Specialized training regimen emphasizing secure coding practices, vulnerability reduction, and adherence to security standards.
- **Enterprise-Optimized Performance:** Tailored to support various programming languages and enterprise frameworks with robust, context-aware suggestions.
- **Compliance-Driven Design:** Incorporates features to aid in maintaining compliance with industry-specific standards (e.g., GDPR, HIPAA, SOC 2).
### Limitations
- Bifröst should be used under human supervision to ensure code correctness and security compliance.
- Model-generated code should undergo appropriate security and quality assurance checks before deployment.
### Ethical Considerations
- Users are encouraged to perform regular audits and compliance checks on generated outputs.
- Enterprises should implement responsible AI practices to mitigate biases or unintended consequences.
### Usage
Below are some quick-start instructions for using the model with the `transformers` library.
#### Installation
```sh
$ pip install git+https://github.com/huggingface/transformers@v4.49.0-Gemma-3
```
#### Running with the `pipeline` API
```python
from transformers import pipeline
import torch
pipe = pipeline(
"text-generation",
model="OpenGenerativeAI/Bifrost-27B",
device="cuda",
torch_dtype=torch.bfloat16
)
messages = [{"role": "user", "content": "Generate a secure API key management system."}]
output = pipe(messages, max_new_tokens=200)
print(output[0]["generated_text"])
```
## Terms of Use
This model is released under the **Gemma license**. Users must comply with [Google's Gemma Terms of Use](https://ai.google.dev/gemma/terms), including restrictions on redistribution, modification, and commercial use. |
golf2248/sn11-v4-3-2 | golf2248 | 2025-04-29T15:50:42Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"gemma",
"google",
"Bifrรถst",
"Bifrost",
"code",
"text-generation",
"conversational",
"base_model:google/gemma-3-27b-it",
"base_model:finetune:google/gemma-3-27b-it",
"license:gemma",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-29T15:50:38Z | ---
license: gemma
library_name: transformers
pipeline_tag: text-generation
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: >-
To access Gemma on Hugging Face, you’re required to review and agree to
Google’s usage license. To do this, please ensure you’re logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
base_model: google/gemma-3-27b-it
tags:
- transformers
- gemma3
- gemma
- google
- Bifröst
- Bifrost
- code
---
## Bifröst-27B

Bifröst-27B is an advanced AI model built upon the gemma3 architecture, specifically fine-tuned for secure and efficient enterprise-grade code generation with reasoning. Designed to meet rigorous standards of safety, accuracy, and reliability, Bifröst empowers organizations to streamline software development workflows while prioritizing security and compliance.
### Model Details
- **Model Name:** Bifröst-27B
- **Base Architecture:** gemma3
- **Application:** Enterprise Secure Code Generation
- **Release Date:** 16-March-2025
### Intended Use
Bifröst is designed explicitly for:
- Generating secure, efficient, and high-quality code.
- Supporting development tasks within regulated enterprise environments.
- Enhancing productivity by automating routine coding tasks without compromising security.
### Features
- **Security-Focused Training:** Specialized training regimen emphasizing secure coding practices, vulnerability reduction, and adherence to security standards.
- **Enterprise-Optimized Performance:** Tailored to support various programming languages and enterprise frameworks with robust, context-aware suggestions.
- **Compliance-Driven Design:** Incorporates features to aid in maintaining compliance with industry-specific standards (e.g., GDPR, HIPAA, SOC 2).
### Limitations
- Bifröst should be used under human supervision to ensure code correctness and security compliance.
- Model-generated code should undergo appropriate security and quality assurance checks before deployment.
### Ethical Considerations
- Users are encouraged to perform regular audits and compliance checks on generated outputs.
- Enterprises should implement responsible AI practices to mitigate biases or unintended consequences.
### Usage
Below are some quick-start instructions for using the model with the `transformers` library.
#### Installation
```sh
$ pip install git+https://github.com/huggingface/transformers@v4.49.0-Gemma-3
```
#### Running with the `pipeline` API
```python
from transformers import pipeline
import torch
pipe = pipeline(
"text-generation",
model="OpenGenerativeAI/Bifrost-27B",
device="cuda",
torch_dtype=torch.bfloat16
)
messages = [{"role": "user", "content": "Generate a secure API key management system."}]
output = pipe(messages, max_new_tokens=200)
print(output[0]["generated_text"])
```
## Terms of Use
This model is released under the **Gemma license**. Users must comply with [Google's Gemma Terms of Use](https://ai.google.dev/gemma/terms), including restrictions on redistribution, modification, and commercial use. |
wassname/qwen-7B-fourchan | wassname | 2025-04-29T15:49:30Z | 0 | 0 | transformers | [
"transformers",
"qwen2",
"feature-extraction",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/Qwen2.5-Coder-7B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-Coder-7B-Instruct",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2025-04-29T15:47:29Z | ---
base_model: unsloth/Qwen2.5-Coder-7B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** wassname
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2.5-Coder-7B-Instruct
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
MikuMasterRace/Hatsune_Miku_-_Usamiku_Furry_-_IllustriousXL_v1 | MikuMasterRace | 2025-04-29T15:43:41Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:OnomaAIResearch/Illustrious-xl-early-release-v0",
"base_model:adapter:OnomaAIResearch/Illustrious-xl-early-release-v0",
"region:us"
] | text-to-image | 2025-04-29T15:39:29Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '1girl, solo, hatsune miku, usamiku,
aqua eyes, necktie, grey shirt, shirt, detached sleeves, aqua hair, black sleeves, skirt, headset, collared shirt, pleated skirt, thighhighs, hair between eyes,
animal hands, white fur, rabbit ears,
:3, rabbit girl, animal nose, body fur, white fur,
furry female, furrification,
cowboy shot, one eye closed, zettai ryouiki, sparkle,
open mouth, smile, looking at viewer, looking at viewer, white background,
safe, newset, omufujoshi, black outline, thick outlines,
masterpiece, best quality, amazing quality'
output:
url: images/ComfyUI_(hiresfix)_2025-04-29_00000_8.png
- text: '1girl, solo, hatsune miku, usamiku,
aqua eyes, necktie, grey shirt, shirt, detached sleeves, aqua hair, black sleeves, skirt, headphones, headset, collared shirt, pleated skirt, thighhighs, hair between eyes,
animal hands, white fur, rabbit ears,
:3, rabbit girl, animal nose, body fur, white fur,
furry female, furry, furrification,
holding doll, fumo \(doll\), head tilt,
portrait, sparkle,
open mouth, smile, looking at another, white background,
safe, newset, omufujoshi, black outline, thick outlines,
masterpiece, best quality, amazing quality'
output:
url: images/ComfyUI_(hiresfix)_2025-04-29_00000_5.png
- text: '1girl, solo, hatsune miku, usamiku,
aqua eyes, necktie, grey shirt, shirt, detached sleeves, aqua hair, black sleeves, skirt, headset, collared shirt, pleated skirt, thighhighs, hair between eyes, number print, thigh boots,
animal hands, white fur, rabbit ears,
:3, rabbit girl, animal nose, body fur, white fur,
furry female, furry, furrification,
closed mouth, smile, looking back, white background,
safe, newset, omufujoshi, black outline, thick outlines,
masterpiece, best quality, amazing quality'
output:
url: images/ComfyUI_(hiresfix)_2025-04-29_00000_7.png
base_model: OnomaAIResearch/Illustrious-xl-early-release-v0
instance_prompt: null
---
# Usamiku / Furry Miku (Hatsune Miku) v1 [IllustriousXL 0.1]
<Gallery />
## Reference
This is a kigurumi cosplay of Hatsune Miku. She won the *"Miku Lookalike Contest"* in NYC in 2025.
Socials: [twitter@mikusagi01](https://x.com/mikusagi01), [tiktok@mikusagi01](https://www.tiktok.com/@mikusagi01?lang=en)
[](https://x.com/ziepoopenfarten/status/1906077150563688871)
## Prompting
Main triggerword:
```
usamiku
```
Appearance and clothing:
```
aqua eyes, necktie, grey shirt, shirt, detached sleeves, aqua hair, black sleeves, skirt, headset, collared shirt, pleated skirt, thighhighs, hair between eyes, number print,
animal hands, rabbit tail, white fur, rabbit ears, :3, rabbit girl, animal nose, body fur, white fur, furry female, furrification
```
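The card does not show how to load the LoRA in code; below is a minimal `diffusers` sketch, under the assumption that the Illustrious-XL base model loads through the standard SDXL pipeline (the LoRA weight file name may need to be passed explicitly via `weight_name`).
```python
import torch
from diffusers import StableDiffusionXLPipeline

# Assumption: Illustrious-XL is SDXL-based, so the standard SDXL pipeline applies.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "OnomaAIResearch/Illustrious-xl-early-release-v0", torch_dtype=torch.float16
).to("cuda")

# If the repo holds several .safetensors files, pass weight_name="<file>.safetensors".
pipe.load_lora_weights("MikuMasterRace/Hatsune_Miku_-_Usamiku_Furry_-_IllustriousXL_v1")

image = pipe(
    "1girl, solo, hatsune miku, usamiku, rabbit ears, furry female, "
    "masterpiece, best quality, amazing quality"
).images[0]
image.save("usamiku.png")
```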
## Download model
Weights for this model are available in Safetensors format.
[Download](/MikuMasterRace/Hatsune_Miku_-_Usamiku_Furry_-_IllustriousXL_v1/tree/main) them in the Files & versions tab.
|
LiliaBakh/zhanna_lora_1_april_2025 | LiliaBakh | 2025-04-29T15:43:16Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-04-29T15:29:41Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: zhanna
---
# Zhanna_Lora_1_April_2025
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `zhanna` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "zhanna",
"lora_weights": "https://huggingface.co/LiliaBakh/zhanna_lora_1_april_2025/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('LiliaBakh/zhanna_lora_1_april_2025', weight_name='lora.safetensors')
image = pipeline('zhanna').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 1000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/LiliaBakh/zhanna_lora_1_april_2025/discussions) to add images that show off what you’ve made with this LoRA.
|
joboffer/5657c968-f623-424f-b7db-2cffa752631f | joboffer | 2025-04-29T15:43:11Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-14B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-14B-Instruct",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-04-29T15:34:31Z | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-14B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 5657c968-f623-424f-b7db-2cffa752631f
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2.5-14B-Instruct
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 6672ff8cbabd744e_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/6672ff8cbabd744e_train_data.json
type:
field_input: thinking
field_instruction: prompt
field_output: answer
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: joboffer/5657c968-f623-424f-b7db-2cffa752631f
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/6672ff8cbabd744e_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 777fb87d-b5fc-446f-96ca-5871a5b464cc
wandb_project: s56-33
wandb_run: your_name
wandb_runid: 777fb87d-b5fc-446f-96ca-5871a5b464cc
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 5657c968-f623-424f-b7db-2cffa752631f
This model is a fine-tuned version of [Qwen/Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0626
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_bnb_8bit (betas=(0.9, 0.999), epsilon=1e-08, no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.9065 | 0.1125 | 200 | 1.0626 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
JKilpatrick/legal-ft-3 | JKilpatrick | 2025-04-29T15:40:55Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:156",
"loss:MatryoshkaLoss",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:2205.13147",
"arxiv:1705.00652",
"base_model:Snowflake/snowflake-arctic-embed-l",
"base_model:finetune:Snowflake/snowflake-arctic-embed-l",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2025-04-29T15:39:29Z | ---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:156
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
base_model: Snowflake/snowflake-arctic-embed-l
widget:
- source_sentence: What is the mlx-vlm project by Prince Canuma known for in relation
to Apple Silicon?
sentences:
- 'These abilities are just a few weeks old at this point, and I don’t think their
impact has been fully felt yet. If you haven’t tried them out yet you really should.
Both Gemini and OpenAI offer API access to these features as well. OpenAI started
with a WebSocket API that was quite challenging to use, but in December they announced
a new WebRTC API which is much easier to get started with. Building a web app
that a user can talk to via voice is easy now!
Prompt driven app generation is a commodity already
This was possible with GPT-4 in 2023, but the value it provides became evident
in 2024.'
- 'Prince Canuma’s excellent, fast moving mlx-vlm project brings vision LLMs to
Apple Silicon as well. I used that recently to run Qwen’s QvQ.
While MLX is a game changer, Apple’s own “Apple Intelligence” features have mostly
been a disappointment. I wrote about their initial announcement in June, and I
was optimistic that Apple had focused hard on the subset of LLM applications that
preserve user privacy and minimize the chance of users getting mislead by confusing
features.'
- 'Things we learned about LLMs in 2024
Simon Willison’s Weblog
Subscribe
Things we learned about LLMs in 2024
31st December 2024
A lot has happened in the world of Large Language Models over the course of 2024.
Here’s a review of things we figured out about the field in the past twelve months,
plus my attempt at identifying key themes and pivotal moments.
This is a sequel to my review of 2023.
In this article:'
- source_sentence: What is the cost per million tokens for OpenAI’s most expensive
model, o1?
sentences:
- The most recent twist, again from December (December was a lot) is live video.
ChatGPT voice mode now provides the option to share your camera feed with the
model and talk about what you can see in real time. Google Gemini have a preview
of the same feature, which they managed to ship the day before ChatGPT did.
- 'Today $30/mTok gets you OpenAI’s most expensive model, o1. GPT-4o is $2.50 (12x
cheaper than GPT-4) and GPT-4o mini is $0.15/mTok—200x cheaper than GPT-4, nearly
7x cheaper than GPT-3.5 and massively more capable than that model.
Other model providers charge even less. Anthropic’s Claude 3 Haiku (from March,
but still their cheapest model) is $0.25/mTok. Google’s Gemini 1.5 Flash is $0.075/mTok
and their Gemini 1.5 Flash 8B is $0.0375/mTok—that’s 27x cheaper than GPT-3.5
Turbo last year.
I’ve been tracking these pricing changes under my llm-pricing tag.'
- 'Watching in real time as “slop” becomes a term of art. the way that “spam” became
the term for unwanted emails, “slop” is going in the dictionary as the term for
unwanted AI generated content
I expanded that definition a tiny bit to this:
Slop describes AI-generated content that is both unrequested and unreviewed.
I ended up getting quoted talking about slop in both the Guardian and the NY Times.
Here’s what I said in the NY TImes:
Society needs concise ways to talk about modern A.I. — both the positives and
the negatives. “Ignore that email, it’s spam,” and “Ignore that article, it’s
slop,” are both useful lessons.
- source_sentence: What are some of the reasons people dislike large language models
(LLMs) mentioned in the context?
sentences:
- '260 input tokens, 92 output tokens. Cost approximately 0.0024 cents (that’s less
than a 400th of a cent).
This increase in efficiency and reduction in price is my single favourite trend
from 2024. I want the utility of LLMs at a fraction of the energy cost and it
looks like that’s what we’re getting.
Multimodal vision is common, audio and video are starting to emerge
My butterfly example above illustrates another key trend from 2024: the rise of
multi-modal LLMs.
A year ago the single most notable example of these was GPT-4 Vision, released
at OpenAI’s DevDay in November 2023. Google’s multi-modal Gemini 1.0 was announced
on December 7th 2023 so it also (just) makes it into the 2023 window.'
- 'A lot of people absolutely hate this stuff. In some of the spaces I hang out
(Mastodon, Bluesky, Lobste.rs, even Hacker News on occasion) even suggesting that
“LLMs are useful” can be enough to kick off a huge fight.
I get it. There are plenty of reasons to dislike this technology—the environmental
impact, the (lack of) ethics of the training data, the lack of reliability, the
negative applications, the potential impact on people’s jobs.
LLMs absolutely warrant criticism. We need to be talking through these problems,
finding ways to mitigate them and helping people learn how to use these tools
responsibly in ways where the positive applications outweigh the negative.'
- 'Meta’s Llama 3.2 models deserve a special mention. They may not be GPT-4 class,
but at 1B and 3B sizes they punch massively above their weight. I run Llama 3.2
3B on my iPhone using the free MLC Chat iOS app and it’s a shockingly capable
model for its tiny (<2GB) size. Try firing it up and asking it for “a plot outline
of a Netflix Christmas movie where a data journalist falls in love with a local
ceramacist”. Here’s what I got, at a respectable 20 tokens per second:'
- source_sentence: Why does the author find the term “agents” frustrating?
sentences:
- 'Against this photo of butterflies at the California Academy of Sciences:
A shallow dish, likely a hummingbird or butterfly feeder, is red. Pieces of orange
slices of fruit are visible inside the dish.
Two butterflies are positioned in the feeder, one is a dark brown/black butterfly
with white/cream-colored markings. The other is a large, brown butterfly with
patterns of lighter brown, beige, and black markings, including prominent eye
spots. The larger brown butterfly appears to be feeding on the fruit.'
- 'These price drops are driven by two factors: increased competition and increased
efficiency. The efficiency thing is really important for everyone who is concerned
about the environmental impact of LLMs. These price drops tie directly to how
much energy is being used for running prompts.
There’s still plenty to worry about with respect to the environmental impact of
the great AI datacenter buildout, but a lot of the concerns over the energy cost
of individual prompts are no longer credible.
Here’s a fun napkin calculation: how much would it cost to generate short descriptions
of every one of the 68,000 photos in my personal photo library using Google’s
Gemini 1.5 Flash 8B (released in October), their cheapest model?'
- '“Agents” still haven’t really happened yet
I find the term “agents” extremely frustrating. It lacks a single, clear and widely
understood meaning... but the people who use the term never seem to acknowledge
that.
If you tell me that you are building “agents”, you’ve conveyed almost no information
to me at all. Without reading your mind I have no way of telling which of the
dozens of possible definitions you are talking about.'
- source_sentence: What new feature did Anthropic release that allows Claude to write
on-demand interactive applications?
sentences:
- 'We already knew LLMs were spookily good at writing code. If you prompt them right,
it turns out they can build you a full interactive application using HTML, CSS
and JavaScript (and tools like React if you wire up some extra supporting build
mechanisms)—often in a single prompt.
Anthropic kicked this idea into high gear when they released Claude Artifacts,
a groundbreaking new feature that was initially slightly lost in the noise due
to being described half way through their announcement of the incredible Claude
3.5 Sonnet.
With Artifacts, Claude can write you an on-demand interactive application and
then let you use it directly inside the Claude interface.
Here’s my Extract URLs app, entirely generated by Claude:'
- 'Stuff we figured out about AI in 2023
Simon Willison’s Weblog
Subscribe
Stuff we figured out about AI in 2023
31st December 2023
2023 was the breakthrough year for Large Language Models (LLMs). I think it’s
OK to call these AI—they’re the latest and (currently) most interesting development
in the academic field of Artificial Intelligence that dates back to the 1950s.
Here’s my attempt to round up the highlights in one place!'
- 'An interesting point of comparison here could be the way railways rolled out
around the world in the 1800s. Constructing these required enormous investments
and had a massive environmental impact, and many of the lines that were built
turned out to be unnecessary—sometimes multiple lines from different companies
serving the exact same routes!
The resulting bubbles contributed to several financial crashes, see Wikipedia
for Panic of 1873, Panic of 1893, Panic of 1901 and the UK’s Railway Mania. They
left us with a lot of useful infrastructure and a great deal of bankruptcies and
environmental damage.
The year of slop'
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: SentenceTransformer based on Snowflake/snowflake-arctic-embed-l
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: Unknown
type: unknown
metrics:
- type: cosine_accuracy@1
value: 0.9583333333333334
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 1.0
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 1.0
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 1.0
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.9583333333333334
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.3333333333333333
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.20000000000000004
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.10000000000000002
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.9583333333333334
name: Cosine Recall@1
- type: cosine_recall@3
value: 1.0
name: Cosine Recall@3
- type: cosine_recall@5
value: 1.0
name: Cosine Recall@5
- type: cosine_recall@10
value: 1.0
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.9846220730654774
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.9791666666666666
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.9791666666666666
name: Cosine Map@100
---
# SentenceTransformer based on Snowflake/snowflake-arctic-embed-l
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Snowflake/snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [Snowflake/snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l) <!-- at revision d8fb21ca8d905d2832ee8b96c894d3298964346b -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("JKilpatrick/legal-ft-3")
# Run inference
sentences = [
'What new feature did Anthropic release that allows Claude to write on-demand interactive applications?',
'We already knew LLMs were spookily good at writing code. If you prompt them right, it turns out they can build you a full interactive application using HTML, CSS and JavaScript (and tools like React if you wire up some extra supporting build mechanisms)—often in a single prompt.\nAnthropic kicked this idea into high gear when they released Claude Artifacts, a groundbreaking new feature that was initially slightly lost in the noise due to being described half way through their announcement of the incredible Claude 3.5 Sonnet.\nWith Artifacts, Claude can write you an on-demand interactive application and then let you use it directly inside the Claude interface.\nHere’s my Extract URLs app, entirely generated by Claude:',
'An interesting point of comparison here could be the way railways rolled out around the world in the 1800s. Constructing these required enormous investments and had a massive environmental impact, and many of the lines that were built turned out to be unnecessary—sometimes multiple lines from different companies serving the exact same routes!\nThe resulting bubbles contributed to several financial crashes, see Wikipedia for Panic of 1873, Panic of 1893, Panic of 1901 and the UK’s Railway Mania. They left us with a lot of useful infrastructure and a great deal of bankruptcies and environmental damage.\nThe year of slop',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.9583 |
| cosine_accuracy@3 | 1.0 |
| cosine_accuracy@5 | 1.0 |
| cosine_accuracy@10 | 1.0 |
| cosine_precision@1 | 0.9583 |
| cosine_precision@3 | 0.3333 |
| cosine_precision@5 | 0.2 |
| cosine_precision@10 | 0.1 |
| cosine_recall@1 | 0.9583 |
| cosine_recall@3 | 1.0 |
| cosine_recall@5 | 1.0 |
| cosine_recall@10 | 1.0 |
| **cosine_ndcg@10** | **0.9846** |
| cosine_mrr@10 | 0.9792 |
| cosine_map@100 | 0.9792 |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 156 training samples
* Columns: <code>sentence_0</code> and <code>sentence_1</code>
* Approximate statistics based on the first 156 samples:
| | sentence_0 | sentence_1 |
|:--------|:-----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 14 tokens</li><li>mean: 20.44 tokens</li><li>max: 37 tokens</li></ul> | <ul><li>min: 43 tokens</li><li>mean: 130.83 tokens</li><li>max: 192 tokens</li></ul> |
* Samples:
| sentence_0 | sentence_1 |
|:----------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>Why does the author find the term “agents” frustrating?</code> | <code>“Agents” still haven’t really happened yet<br>I find the term “agents” extremely frustrating. It lacks a single, clear and widely understood meaning... but the people who use the term never seem to acknowledge that.<br>If you tell me that you are building “agents”, you’ve conveyed almost no information to me at all. Without reading your mind I have no way of telling which of the dozens of possible definitions you are talking about.</code> |
| <code>What issue does the author highlight about people who use the term “agents”?</code> | <code>“Agents” still haven’t really happened yet<br>I find the term “agents” extremely frustrating. It lacks a single, clear and widely understood meaning... but the people who use the term never seem to acknowledge that.<br>If you tell me that you are building “agents”, you’ve conveyed almost no information to me at all. Without reading your mind I have no way of telling which of the dozens of possible definitions you are talking about.</code> |
| <code>What are some challenges mentioned in building large language models like GPT-4?</code> | <code>Large Language Models<br>They’re actually quite easy to build<br>You can run LLMs on your own devices<br>Hobbyists can build their own fine-tuned models<br>We don’t yet know how to build GPT-4<br>Vibes Based Development<br>LLMs are really smart, and also really, really dumb<br>Gullibility is the biggest unsolved problem<br>Code may be the best application<br>The ethics of this space remain diabolically complex<br>My blog in 2023</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 10
- `per_device_eval_batch_size`: 10
- `num_train_epochs`: 10
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 10
- `per_device_eval_batch_size`: 10
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 10
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `tp_size`: 0
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
</details>
### Training Logs
| Epoch | Step | cosine_ndcg@10 |
|:-----:|:----:|:--------------:|
| 1.0 | 16 | 0.9692 |
| 2.0 | 32 | 0.9846 |
| 3.0 | 48 | 0.9846 |
| 3.125 | 50 | 0.9846 |
| 4.0 | 64 | 0.9846 |
| 5.0 | 80 | 0.9846 |
| 6.0 | 96 | 0.9846 |
| 6.25 | 100 | 0.9846 |
| 7.0 | 112 | 0.9846 |
| 8.0 | 128 | 0.9846 |
| 9.0 | 144 | 0.9846 |
| 9.375 | 150 | 0.9846 |
| 10.0 | 160 | 0.9846 |
### Framework Versions
- Python: 3.11.12
- Sentence Transformers: 4.1.0
- Transformers: 4.51.3
- PyTorch: 2.6.0+cu124
- Accelerate: 1.6.0
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
AudreyTrungNguyen/qwen2.5-math7b-distill-DeepSeek-R1-Distill-Qwen-14B-classification-math | AudreyTrungNguyen | 2025-04-29T15:39:58Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-29T10:25:34Z | ---
base_model: unsloth/qwen2.5-math-7b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** AudreyTrungNguyen
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-math-7b-instruct-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
thoddnn/colqwen2-v1.0 | thoddnn | 2025-04-29T15:38:16Z | 0 | 0 | colpali | [
"colpali",
"safetensors",
"vidore-experimental",
"vidore",
"visual-document-retrieval",
"en",
"arxiv:2004.12832",
"arxiv:2407.01449",
"arxiv:2106.09685",
"base_model:vidore/colqwen2-base",
"base_model:finetune:vidore/colqwen2-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | visual-document-retrieval | 2025-04-29T15:38:15Z | ---
license: apache-2.0
library_name: colpali
base_model: vidore/colqwen2-base
language:
- en
tags:
- colpali
- vidore-experimental
- vidore
pipeline_tag: visual-document-retrieval
---
# ColQwen2: Visual Retriever based on Qwen2-VL-2B-Instruct with ColBERT strategy
### This is the base version trained with batch_size 256 instead of 32 for 5 epochs and with the updated pad token
ColQwen2 is a model based on a novel model architecture and training strategy based on Vision Language Models (VLMs) to efficiently index documents from their visual features.
It is a [Qwen2-VL-2B](https://huggingface.co/Qwen/Qwen2-VL-2B-Instruct) extension that generates [ColBERT](https://arxiv.org/abs/2004.12832)-style multi-vector representations of text and images.
It was introduced in the paper [ColPali: Efficient Document Retrieval with Vision Language Models](https://arxiv.org/abs/2407.01449) and first released in [this repository](https://github.com/ManuelFay/colpali).
<p align="center"><img width=800 src="https://github.com/illuin-tech/colpali/blob/main/assets/colpali_architecture.webp?raw=true"/></p>
## Version specificity
This model takes dynamic image resolutions in input and does not resize them, changing their aspect ratio as in ColPali.
Maximal resolution is set so that 768 image patches are created at most. Experiments show clear improvements with larger amounts of image patches, at the cost of memory requirements.
This version is trained with `colpali-engine==0.3.1`.
Data is the same as the ColPali data described in the paper.
## Model Training
### Dataset
Our training dataset of 127,460 query-page pairs is comprised of train sets of openly available academic datasets (63%) and a synthetic dataset made up of pages from web-crawled PDF documents and augmented with VLM-generated (Claude-3 Sonnet) pseudo-questions (37%).
Our training set is fully English by design, enabling us to study zero-shot generalization to non-English languages. We explicitly verify no multi-page PDF document is used both in [*ViDoRe*](https://huggingface.co/collections/vidore/vidore-benchmark-667173f98e70a1c0fa4db00d) and in the train set to prevent evaluation contamination.
A validation set is created with 2% of the samples to tune hyperparameters.
*Note: Multilingual data is present in the pretraining corpus of the language model and most probably in the multimodal training data.*
### Parameters
All models are trained for 1 epoch on the train set. Unless specified otherwise, we train models in `bfloat16` format, use low-rank adapters ([LoRA](https://arxiv.org/abs/2106.09685))
with `alpha=32` and `r=32` on the transformer layers from the language model,
as well as the final randomly initialized projection layer, and use a `paged_adamw_8bit` optimizer.
We train on an 8 GPU setup with data parallelism, a learning rate of 5e-5 with linear decay with 2.5% warmup steps, and a batch size of 32.
## Usage
Make sure `colpali-engine` is installed from source or with a version greater than 0.3.4.
`transformers` version must be > 4.46.1.
```bash
pip install git+https://github.com/illuin-tech/colpali
```
```python
import torch
from PIL import Image
from transformers.utils.import_utils import is_flash_attn_2_available
from colpali_engine.models import ColQwen2, ColQwen2Processor
model = ColQwen2.from_pretrained(
"vidore/colqwen2-v1.0",
torch_dtype=torch.bfloat16,
device_map="cuda:0", # or "mps" if on Apple Silicon
attn_implementation="flash_attention_2" if is_flash_attn_2_available() else None,
).eval()
processor = ColQwen2Processor.from_pretrained("vidore/colqwen2-v1.0")
# Your inputs
images = [
Image.new("RGB", (128, 128), color="white"),
Image.new("RGB", (64, 32), color="black"),
]
queries = [
"Is attention really all you need?",
"What is the amount of bananas farmed in Salvador?",
]
# Process the inputs
batch_images = processor.process_images(images).to(model.device)
batch_queries = processor.process_queries(queries).to(model.device)
# Forward pass
with torch.no_grad():
image_embeddings = model(**batch_images)
query_embeddings = model(**batch_queries)
scores = processor.score_multi_vector(query_embeddings, image_embeddings)
```
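`scores` is a tensor with one row per query and one column per image. As a small follow-up (not part of the original snippet), the scores could be used to rank the indexed pages for each query, for example:
```python
# Continues from the snippet above; `scores` is assumed to be a
# (num_queries, num_images) tensor of late-interaction similarity scores.
k = min(2, scores.shape[1])
top_scores, top_indices = scores.topk(k, dim=1)

for query, idx, vals in zip(queries, top_indices.tolist(), top_scores.tolist()):
    ranked = ", ".join(f"image {i} (score {s:.2f})" for i, s in zip(idx, vals))
    print(f"{query} -> {ranked}")
```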
## Limitations
- **Focus**: The model primarily focuses on PDF-type documents and high-resource languages, potentially limiting its generalization to other document types or less represented languages.
- **Support**: The model relies on multi-vector retrieval derived from the ColBERT late interaction mechanism, which may require engineering efforts to adapt to widely used vector retrieval frameworks that lack native multi-vector support.
## License
ColQwen2's vision language backbone model (Qwen2-VL) is under `apache2.0` license. The adapters attached to the model are under MIT license.
## Contact
- Manuel Faysse: [email protected]
- Hugues Sibille: [email protected]
- Tony Wu: [email protected]
## Citation
If you use any datasets or models from this organization in your research, please cite the original dataset as follows:
```bibtex
@misc{faysse2024colpaliefficientdocumentretrieval,
title={ColPali: Efficient Document Retrieval with Vision Language Models},
author={Manuel Faysse and Hugues Sibille and Tony Wu and Bilel Omrani and Gautier Viaud and Céline Hudelot and Pierre Colombo},
year={2024},
eprint={2407.01449},
archivePrefix={arXiv},
primaryClass={cs.IR},
url={https://arxiv.org/abs/2407.01449},
}
``` |
gabrielc2025/Taxi-v3 | gabrielc2025 | 2025-04-29T15:36:06Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2025-04-29T15:36:02Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.52 +/- 2.62
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="gabrielc2025/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
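The card does not show how to roll out the learned policy; a short evaluation sketch is given below. It assumes the pickled dictionary exposes the learned table under a `qtable` key (the exact key name may differ depending on how the agent was saved):
```python
import numpy as np

# Greedy rollout with the learned Q-table (assumes a "qtable" key in the pickle
# and the gymnasium step/reset API).
qtable = model["qtable"]
state, info = env.reset()
total_reward, done = 0, False

while not done:
    action = int(np.argmax(qtable[state]))  # act greedily w.r.t. the Q-table
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated

print(f"Episode return: {total_reward}")
```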
|
thoddnn/whisper-large-v3-turbo-q4 | thoddnn | 2025-04-29T15:35:27Z | 0 | 0 | mlx | [
"mlx",
"whisper",
"region:us"
] | null | 2025-04-29T15:35:27Z | ---
library_name: mlx
---
# whisper-large-v3-turbo-q4
This model was converted to MLX format from [`openai/whisper-large-v3-turbo`](https://huggingface.co/openai/whisper-large-v3-turbo).
## Use with mlx
```bash
pip install mlx-whisper
```
```python
import mlx_whisper
result = mlx_whisper.transcribe(
"FILE_NAME",
    path_or_hf_repo="mlx-community/whisper-large-v3-turbo-q4",
)
```
|
Orion-zhen/Qwen3-4B-AWQ | Orion-zhen | 2025-04-29T15:35:12Z | 0 | 1 | null | [
"safetensors",
"qwen3",
"base_model:Qwen/Qwen3-4B",
"base_model:quantized:Qwen/Qwen3-4B",
"license:gpl-3.0",
"4-bit",
"awq",
"region:us"
] | null | 2025-04-29T14:58:47Z | ---
license: gpl-3.0
base_model:
- Qwen/Qwen3-4B
---
# Qwen3-4B-AWQ
```yaml
zero_point: true
bits: 4
version: GEMM
dataset: wikitext + Orion-zhen/gsm8k-r1-qwen-32b
num_examples: 256
```
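No inference snippet is provided; a minimal loading sketch with 🤗 Transformers (which can run AWQ checkpoints when the `autoawq` package is installed) might look like this:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Minimal sketch: load the 4-bit AWQ checkpoint and generate a reply.
model_id = "Orion-zhen/Qwen3-4B-AWQ"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Give me a one-sentence summary of AWQ quantization."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```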
|
TOMFORD79/Smart8 | TOMFORD79 | 2025-04-29T15:34:29Z | 0 | 0 | null | [
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | 2025-04-29T15:02:53Z | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
Yehooor/Altron_fake_detection_model_lora | Yehooor | 2025-04-29T15:34:18Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma3",
"trl",
"uk",
"en",
"base_model:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-29T15:30:44Z | ---
base_model: unsloth/gemma-3-4b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
- trl
license: apache-2.0
language:
- uk
- en
---
# Uploaded model
- **Developed by:** Yehooor
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3-4b-it-unsloth-bnb-4bit
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) |
TOMFORD79/Smart7 | TOMFORD79 | 2025-04-29T15:33:49Z | 0 | 0 | null | [
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | 2025-04-29T15:02:47Z | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
thoddnn/Qwen2-VL-2B-Instruct-8bit | thoddnn | 2025-04-29T15:33:03Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2_vl",
"image-text-to-text",
"multimodal",
"mlx",
"conversational",
"en",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-04-29T15:33:02Z | ---
language:
- en
library_name: transformers
license: apache-2.0
pipeline_tag: image-text-to-text
tags:
- multimodal
- mlx
---
# mlx-community/Qwen2-VL-2B-Instruct-8bit
This model was converted to MLX format from [`Qwen/Qwen2-VL-2B-Instruct`](https://huggingface.co/Qwen/Qwen2-VL-2B-Instruct) using mlx-vlm version **0.0.13**.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen2-VL-2B-Instruct) for more details on the model.
## Use with mlx
```bash
pip install -U mlx-vlm
```
```bash
python -m mlx_vlm.generate --model mlx-community/Qwen2-VL-2B-Instruct-8bit --max-tokens 100 --temp 0.0
```
|
dgambettaphd/M_llm2_gen1_run0_W_doc1000_synt64_tot128_lr5em5_SYNLAST | dgambettaphd | 2025-04-29T15:32:48Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-29T15:32:37Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
jsano/finetuned-model-llama3b | jsano | 2025-04-29T15:28:29Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-29T15:28:21Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Hazde/careerbot_PG6_Qwen_Qwen2.5-0.5B-Instruct_model_LoRA_4 | Hazde | 2025-04-29T15:28:23Z | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:Qwen/Qwen2.5-0.5B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-0.5B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2024-11-12T02:17:05Z | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-0.5B-Instruct
tags:
- generated_from_trainer
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
model-index:
- name: careerbot_PG6_Qwen_Qwen2.5-0.5B-Instruct_model_LoRA_4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# careerbot_PG6_Qwen_Qwen2.5-0.5B-Instruct_model_LoRA_4
This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9322
## Model description
More information needed
## Intended uses & limitations
More information needed
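Since the card ships no usage example, here is a minimal, hedged sketch for loading this LoRA adapter on top of the base instruct model with PEFT (assuming the adapter weights in this repository are compatible with `PeftModel.from_pretrained`):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Sketch: attach the LoRA adapter to the Qwen2.5-0.5B-Instruct base model.
base_id = "Qwen/Qwen2.5-0.5B-Instruct"
adapter_id = "Hazde/careerbot_PG6_Qwen_Qwen2.5-0.5B-Instruct_model_LoRA_4"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)

# Hypothetical career-advice prompt, just for illustration.
prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "What skills should I highlight for a data analyst role?"}],
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)
output = model.generate(prompt, max_new_tokens=128)
print(tokenizer.decode(output[0][prompt.shape[-1]:], skip_special_tokens=True))
```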
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 800
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.2234 | 1.0 | 674 | 2.9322 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.0+cu124
- Datasets 2.19.0
- Tokenizers 0.20.1 |
Hazde/careerbot_PG6_Qwen_Qwen2.5-0.5B-Instruct_model_LoRA_2 | Hazde | 2025-04-29T15:28:14Z | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:Qwen/Qwen2.5-0.5B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-0.5B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2024-11-11T22:51:33Z | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-0.5B-Instruct
tags:
- generated_from_trainer
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
model-index:
- name: careerbot_PG6_Qwen_Qwen2.5-0.5B-Instruct_model_LoRA_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# careerbot_PG6_Qwen_Qwen2.5-0.5B-Instruct_model_LoRA_2
This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.9414
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 800
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.3394 | 1.0 | 674 | 3.9414 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.0+cu124
- Datasets 2.19.0
- Tokenizers 0.20.1 |
Hazde/careerbot_PG6_Qwen_Qwen2.5-0.5B-Instruct_model_LoRA | Hazde | 2025-04-29T15:28:06Z | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:Qwen/Qwen2.5-0.5B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-0.5B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2024-11-11T21:06:27Z | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-0.5B-Instruct
tags:
- generated_from_trainer
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
model-index:
- name: careerbot_PG6_Qwen_Qwen2.5-0.5B-Instruct_model_LoRA
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# careerbot_PG6_Qwen_Qwen2.5-0.5B-Instruct_model_LoRA
This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 800
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 449 | 4.2482 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.0+cu124
- Datasets 2.19.0
- Tokenizers 0.20.1 |
Hazde/careerbot_PG6_Qwen_Qwen2.5-0.5B-Instruct_model_small | Hazde | 2025-04-29T15:27:35Z | 7 | 0 | null | [
"safetensors",
"qwen2",
"region:us"
] | null | 2024-11-04T23:40:57Z | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-0.5B-Instruct
tags:
- generated_from_trainer
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
model-index:
- name: careerbot_PG6_Qwen_Qwen2.5-0.5B-Instruct_model_small
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# careerbot_PG6_Qwen_Qwen2.5-0.5B-Instruct_model_small
This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2264
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 212 | 1.2660 |
| No log | 2.0 | 424 | 1.2264 |
### Framework versions
- Transformers 4.46.1
- Pytorch 2.5.0+cu124
- Datasets 2.19.0
- Tokenizers 0.20.1
|
jsano/finetuned-model | jsano | 2025-04-29T15:27:20Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-29T11:26:04Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
TheMindExpansionNetwork/M1NDB0T-1111-14B-Q4_K_M-GGUF | TheMindExpansionNetwork | 2025-04-29T15:27:05Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mindbot",
"synthetic-entity",
"agi-companion",
"digital-human",
"llama-factory",
"qwen3-14b",
"mindexpander",
"llama-cpp",
"gguf-my-repo",
"base_model:TheMindExpansionNetwork/M1NDB0T-1111-14B",
"base_model:quantized:TheMindExpansionNetwork/M1NDB0T-1111-14B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-29T15:26:26Z | ---
base_model: TheMindExpansionNetwork/M1NDB0T-1111-14B
library_name: transformers
tags:
- mindbot
- synthetic-entity
- agi-companion
- digital-human
- llama-factory
- qwen3-14b
- mindexpander
- llama-cpp
- gguf-my-repo
---
# TheMindExpansionNetwork/M1NDB0T-1111-14B-Q4_K_M-GGUF
This model was converted to GGUF format from [`TheMindExpansionNetwork/M1NDB0T-1111-14B`](https://huggingface.co/TheMindExpansionNetwork/M1NDB0T-1111-14B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/TheMindExpansionNetwork/M1NDB0T-1111-14B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo TheMindExpansionNetwork/M1NDB0T-1111-14B-Q4_K_M-GGUF --hf-file m1ndb0t-1111-14b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo TheMindExpansionNetwork/M1NDB0T-1111-14B-Q4_K_M-GGUF --hf-file m1ndb0t-1111-14b-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo TheMindExpansionNetwork/M1NDB0T-1111-14B-Q4_K_M-GGUF --hf-file m1ndb0t-1111-14b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo TheMindExpansionNetwork/M1NDB0T-1111-14B-Q4_K_M-GGUF --hf-file m1ndb0t-1111-14b-q4_k_m.gguf -c 2048
```
|
Hazde/careerbot_PG6_Qwen_Qwen2.5-0.5B-Instruct_model | Hazde | 2025-04-29T15:27:03Z | 8 | 0 | null | [
"safetensors",
"qwen2",
"region:us"
] | null | 2024-10-31T17:16:26Z | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-0.5B-Instruct
tags:
- generated_from_trainer
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
model-index:
- name: careerbot_PG6_Qwen_Qwen2.5-0.5B-Instruct_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# careerbot_PG6_Qwen_Qwen2.5-0.5B-Instruct_model
This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4229
## Model description
More information needed
## Intended uses & limitations
More information needed
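No inference snippet is included; a minimal sketch using the 🤗 `pipeline` API (assuming the checkpoint keeps the base model's chat template) could look like this:
```python
from transformers import pipeline

# Sketch: run the fine-tuned checkpoint as a chat-style text-generation pipeline.
generator = pipeline(
    "text-generation",
    model="Hazde/careerbot_PG6_Qwen_Qwen2.5-0.5B-Instruct_model",
)

# Hypothetical career-advice prompt, just for illustration.
messages = [{"role": "user", "content": "How should I prepare for a software engineering interview?"}]
result = generator(messages, max_new_tokens=128)
print(result[0]["generated_text"][-1]["content"])
```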
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAFACTOR and the args are: No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1
- training_steps: 5072
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-------:|:----:|:---------------:|
| No log | 0.9968 | 158 | 1.0200 |
| No log | 2.0 | 317 | 0.9880 |
| No log | 2.9968 | 475 | 0.9873 |
| No log | 4.0 | 634 | 1.0426 |
| No log | 4.9968 | 792 | 1.0514 |
| No log | 6.0 | 951 | 1.0938 |
| No log | 6.9968 | 1109 | 1.0742 |
| No log | 8.0 | 1268 | 1.1283 |
| No log | 8.9968 | 1426 | 1.1356 |
| No log | 10.0 | 1585 | 1.1581 |
| No log | 10.9968 | 1743 | 1.2045 |
| No log | 12.0 | 1902 | 1.2060 |
| No log | 12.9968 | 2060 | 1.2354 |
| No log | 14.0 | 2219 | 1.2285 |
| No log | 14.9968 | 2377 | 1.2401 |
| No log | 16.0 | 2536 | 1.2986 |
| No log | 16.9968 | 2694 | 1.2904 |
| No log | 18.0 | 2853 | 1.3051 |
| No log | 18.9968 | 3011 | 1.3109 |
| No log | 20.0 | 3170 | 1.3154 |
| No log | 20.9968 | 3328 | 1.3202 |
| No log | 22.0 | 3487 | 1.3282 |
| No log | 22.9968 | 3645 | 1.3385 |
| No log | 24.0 | 3804 | 1.3295 |
| No log | 24.9968 | 3962 | 1.3512 |
| No log | 26.0 | 4121 | 1.3583 |
| No log | 26.9968 | 4279 | 1.3666 |
| No log | 28.0 | 4438 | 1.3841 |
| No log | 28.9968 | 4596 | 1.3938 |
| No log | 30.0 | 4755 | 1.4084 |
| No log | 30.9968 | 4913 | 1.4178 |
| No log | 32.0 | 5072 | 1.4229 |
### Framework versions
- Transformers 4.46.1
- Pytorch 2.5.0+cu124
- Datasets 2.19.0
- Tokenizers 0.20.1
|
mradermacher/Qwerus-7B-GGUF | mradermacher | 2025-04-29T15:26:50Z | 170 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:mlabonne/Qwerus-7B",
"base_model:quantized:mlabonne/Qwerus-7B",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-25T22:28:11Z | ---
base_model: mlabonne/Qwerus-7B
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
library_name: transformers
license: mit
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/mlabonne/Qwerus-7B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Qwerus-7B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
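As an alternative to the llama.cpp CLI, a hedged sketch with the `llama-cpp-python` bindings (not covered by this card) shows how one of the quant files listed below could be loaded locally:
```python
from llama_cpp import Llama

# Sketch: load a downloaded GGUF quant and chat with it.
llm = Llama(
    model_path="Qwerus-7B.Q4_K_M.gguf",  # path to the file downloaded from this repo
    n_ctx=4096,
    n_gpu_layers=-1,  # offload all layers if a GPU-enabled build is available
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what a model merge is in one sentence."}]
)
print(response["choices"][0]["message"]["content"])
```
Q4_K_M is used here because the table below marks it as the recommended size/quality trade-off.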
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwerus-7B-GGUF/resolve/main/Qwerus-7B.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Qwerus-7B-GGUF/resolve/main/Qwerus-7B.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwerus-7B-GGUF/resolve/main/Qwerus-7B.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwerus-7B-GGUF/resolve/main/Qwerus-7B.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qwerus-7B-GGUF/resolve/main/Qwerus-7B.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwerus-7B-GGUF/resolve/main/Qwerus-7B.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwerus-7B-GGUF/resolve/main/Qwerus-7B.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwerus-7B-GGUF/resolve/main/Qwerus-7B.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwerus-7B-GGUF/resolve/main/Qwerus-7B.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwerus-7B-GGUF/resolve/main/Qwerus-7B.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwerus-7B-GGUF/resolve/main/Qwerus-7B.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Qwerus-7B-GGUF/resolve/main/Qwerus-7B.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Vittorwo/Jogo | Vittorwo | 2025-04-29T15:26:25Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-04-29T15:26:22Z | ---
license: apache-2.0
---
|
cocoroloco/distilbert-review-classification | cocoroloco | 2025-04-29T15:24:27Z | 0 | 0 | null | [
"safetensors",
"distilbert",
"text-classification",
"sentiment-analysis",
"reviews",
"spanish",
"es",
"dataset:amazon_reviews_multi",
"license:mit",
"model-index",
"region:us"
] | text-classification | 2025-04-29T15:16:06Z | ---
language:
- es
license: mit
tags:
- text-classification
- sentiment-analysis
- reviews
- distilbert
- spanish
datasets:
- amazon_reviews_multi
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: distilbert-review_classification
results:
- task:
type: text-classification
name: Clasificación de reseñas (5 clases)
dataset:
name: amazon_reviews_multi (español)
type: amazon_reviews_multi
metrics:
- type: accuracy
value: 0.5808
- type: f1
value: 0.58158
pipeline_tag: text-classification
widget:
- text: "Este producto es increรญble, funciona perfectamente y el precio es excelente."
- text: "La calidad del producto deja mucho que desear y llegรณ con un retraso considerable."
---
# distilbert-review_classification
This model is a DistilBERT variant trained to classify Amazon reviews in Spanish. It is based on `distilbert-base-multilingual` and was fine-tuned to predict star ratings (1-5) from the review text.
## Model
**Base architecture:** DistilBERT (distilbert-base-multilingual)
**Task:** Text classification (5 classes)
**Language:** Spanish
**Use case:** Sentiment analysis and review classification
## Performance
The model was evaluated on a balanced dataset with 1,000 samples per class (1- to 5-star ratings):
| Metric | Value |
|---------|-------|
| Accuracy | 0.5808 |
| F1 Score (macro average) | 0.58158 |
| Precision (macro average) | 0.58303 |
| Recall (macro average) | 0.5808 |
### Per-class performance
| Class | Precision | Recall | F1 Score | Support |
|-------|-----------|--------|----------|---------|
| 1 ⭐ | 0.72069 | 0.707 | 0.71378 | 1000 |
| 2 ⭐ | 0.50409 | 0.554 | 0.52787 | 1000 |
| 3 ⭐ | 0.48916 | 0.474 | 0.48146 | 1000 |
| 4 ⭐ | 0.51613 | 0.512 | 0.51406 | 1000 |
| 5 ⭐ | 0.68509 | 0.657 | 0.67075 | 1000 |
## Training details
* **Epochs:** 1
* **Training steps:** 50,000
* **Training time:** ~8.2 hours (29,486 seconds)
* **Final loss:** 0.9721
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
# Load the model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("polodealvarado/distilbert-review_classification")
model = AutoModelForSequenceClassification.from_pretrained("polodealvarado/distilbert-review_classification")
# Prepare the input text
texto = "Este producto superó mis expectativas, lo recomiendo totalmente."
inputs = tokenizer(texto, return_tensors="pt", padding=True, truncation=True, max_length=512)
# Run the prediction
with torch.no_grad():
    outputs = model(**inputs)
    logits = outputs.logits
predicted_class = torch.argmax(logits, dim=1).item()
# The predicted class is a number from 0 to 4, corresponding to 1-5 stars
estrellas_predichas = predicted_class + 1
print(f"Prediction: {estrellas_predichas} stars")
```
## Limitations
- The model was trained on Amazon review data, so it may perform worse in other domains.
- Performance is highest for reviews that are clearly positive (5 stars) or c |
mradermacher/Qwerus-7B-i1-GGUF | mradermacher | 2025-04-29T15:23:47Z | 371 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:mlabonne/Qwerus-7B",
"base_model:quantized:mlabonne/Qwerus-7B",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-03-26T21:14:34Z | ---
base_model: mlabonne/Qwerus-7B
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
library_name: transformers
license: mit
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/mlabonne/Qwerus-7B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Qwerus-7B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwerus-7B-i1-GGUF/resolve/main/Qwerus-7B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwerus-7B-i1-GGUF/resolve/main/Qwerus-7B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwerus-7B-i1-GGUF/resolve/main/Qwerus-7B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwerus-7B-i1-GGUF/resolve/main/Qwerus-7B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwerus-7B-i1-GGUF/resolve/main/Qwerus-7B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Qwerus-7B-i1-GGUF/resolve/main/Qwerus-7B.i1-IQ2_M.gguf) | i1-IQ2_M | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Qwerus-7B-i1-GGUF/resolve/main/Qwerus-7B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.9 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwerus-7B-i1-GGUF/resolve/main/Qwerus-7B.i1-Q2_K.gguf) | i1-Q2_K | 3.1 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwerus-7B-i1-GGUF/resolve/main/Qwerus-7B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwerus-7B-i1-GGUF/resolve/main/Qwerus-7B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwerus-7B-i1-GGUF/resolve/main/Qwerus-7B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwerus-7B-i1-GGUF/resolve/main/Qwerus-7B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Qwerus-7B-i1-GGUF/resolve/main/Qwerus-7B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Qwerus-7B-i1-GGUF/resolve/main/Qwerus-7B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwerus-7B-i1-GGUF/resolve/main/Qwerus-7B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwerus-7B-i1-GGUF/resolve/main/Qwerus-7B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwerus-7B-i1-GGUF/resolve/main/Qwerus-7B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.5 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Qwerus-7B-i1-GGUF/resolve/main/Qwerus-7B.i1-Q4_0.gguf) | i1-Q4_0 | 4.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwerus-7B-i1-GGUF/resolve/main/Qwerus-7B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Qwerus-7B-i1-GGUF/resolve/main/Qwerus-7B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwerus-7B-i1-GGUF/resolve/main/Qwerus-7B.i1-Q4_1.gguf) | i1-Q4_1 | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwerus-7B-i1-GGUF/resolve/main/Qwerus-7B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwerus-7B-i1-GGUF/resolve/main/Qwerus-7B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwerus-7B-i1-GGUF/resolve/main/Qwerus-7B.i1-Q6_K.gguf) | i1-Q6_K | 6.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
melijauregui/fashionSigLIP-roturas15v2 | melijauregui | 2025-04-29T15:23:27Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"feature-extraction",
"custom_code",
"arxiv:1910.09700",
"region:us"
] | feature-extraction | 2025-04-29T15:22:39Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
lm-kit/nomic-embed-vision-1.5-lmk | lm-kit | 2025-04-29T15:23:12Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-04-21T13:47:27Z | ---
license: apache-2.0
---
# Nomic Embed Vision
Original model: https://huggingface.co/nomic-ai/nomic-embed-vision-v1.5
This repository contains the Nomic Embed Vision model stored in the .lmk file format, designed for inference with the LM-Kit SDK.
|
luis20209560/tedi | luis20209560 | 2025-04-29T15:22:39Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-04-29T15:22:38Z | ---
license: apache-2.0
---
|
mlfoundations-dev/Qwen2.5-7B-Instruct_d1_science_long_paragraphs | mlfoundations-dev | 2025-04-29T15:22:06Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-29T15:19:31Z | ---
library_name: transformers
license: other
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: Qwen2.5-7B-Instruct_d1_science_long_paragraphs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Qwen2.5-7B-Instruct_d1_science_long_paragraphs
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/d1_science_long_paragraphs dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.5.0a0+b465a5843b.nv24.09
- Datasets 3.5.0
- Tokenizers 0.20.3
|
cdgrande/Test | cdgrande | 2025-04-29T15:21:42Z | 0 | 0 | null | [
"safetensors",
"license:apache-2.0",
"region:us"
] | null | 2025-01-18T22:51:58Z | ---
license: apache-2.0
---
|
lm-kit/qwen-3-0.6b-instruct-gguf | lm-kit | 2025-04-29T15:21:29Z | 15 | 0 | null | [
"gguf",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-29T08:25:53Z | ---
license: apache-2.0
---
## Model Summary
This repository hosts quantized versions of the Alibaba Qwen-3 Instruct 0.6B model.
**Format:** GGUF
**Converter:** llama.cpp b6ce7430b7eb51f032152316880204e0a9c0470e
**Quantizer:** LM-Kit.NET 2025.4.13
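The GGUF files can be used with any llama.cpp-based runtime. A minimal sketch using `llama-cpp-python` (the filename below is a placeholder for whichever quantized file you download from this repository):

```python
from llama_cpp import Llama

# Path to a GGUF file downloaded from this repository (placeholder name).
llm = Llama(model_path="qwen-3-0.6b-instruct-q4_k_m.gguf", n_ctx=4096)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Give me a one-sentence summary of Qwen3."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```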
For more detailed information on the base model, please visit the following link:
- [Qwen3-0.6B](https://huggingface.co/Qwen/Qwen3-0.6B) |
lm-kit/qwen-3-8b-instruct-gguf | lm-kit | 2025-04-29T15:20:24Z | 1 | 0 | null | [
"gguf",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-29T08:26:55Z | ---
license: apache-2.0
---
## Model Summary
This repository hosts quantized versions of the Alibaba Qwen-3 Instruct 8B model.
**Format:** GGUF
**Converter:** llama.cpp b6ce7430b7eb51f032152316880204e0a9c0470e
**Quantizer:** LM-Kit.NET 2025.4.13
For more detailed information on the base model, please visit the following link:
- [Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B) |
AlphaGaO/Qwen3-1.7B-GPTQ | AlphaGaO | 2025-04-29T15:20:10Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"base_model:Qwen/Qwen3-1.7B-Base",
"base_model:quantized:Qwen/Qwen3-1.7B-Base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"gptq",
"region:us"
] | text-generation | 2025-04-29T15:14:24Z | ---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-1.7B/blob/main/LICENSE
pipeline_tag: text-generation
base_model:
- Qwen/Qwen3-1.7B-Base
---
# Qwen3-1.7B-GPTQ
GPTQ Quantized model, tuned with dataset AlphaGaO/fused_distillation_dataset
- bits: 4
- group_size: 128
- is_marlin_format: True
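A minimal loading sketch for this GPTQ checkpoint with Hugging Face `transformers` (assuming a GPTQ-capable backend such as `optimum` with `auto-gptq` or `gptqmodel` is installed; otherwise follow the original Qwen3 quickstart below):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "AlphaGaO/Qwen3-1.7B-GPTQ"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# The GPTQ quantization config stored in the repo is picked up automatically.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

text = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Hello!"}],
    tokenize=False,
    add_generation_prompt=True,
)
inputs = tokenizer(text, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```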
<a href="https://chat.qwen.ai/" target="_blank" style="margin: 2px;">
<img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/>
</a>
## Qwen3 Highlights
Qwen3 is the latest generation of large language models in Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models. Built upon extensive training, Qwen3 delivers groundbreaking advancements in reasoning, instruction-following, agent capabilities, and multilingual support, with the following key features:
- **Unique support for seamless switching between thinking mode** (for complex logical reasoning, math, and coding) and **non-thinking mode** (for efficient, general-purpose dialogue) **within a single model**, ensuring optimal performance across various scenarios.
- **Significantly enhanced reasoning capabilities**, surpassing previous QwQ (in thinking mode) and Qwen2.5 instruct models (in non-thinking mode) on mathematics, code generation, and commonsense logical reasoning.
- **Superior human preference alignment**, excelling in creative writing, role-playing, multi-turn dialogues, and instruction following, to deliver a more natural, engaging, and immersive conversational experience.
- **Expertise in agent capabilities**, enabling precise integration with external tools in both thinking and non-thinking modes and achieving leading performance among open-source models in complex agent-based tasks.
- **Support of 100+ languages and dialects** with strong capabilities for **multilingual instruction following** and **translation**.
## Model Overview
**Qwen3-1.7B** has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Number of Parameters: 1.7B
- Number of Parameters (Non-Embedding): 1.4B
- Number of Layers: 28
- Number of Attention Heads (GQA): 16 for Q and 8 for KV
- Context Length: 32,768
For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our [blog](https://qwenlm.github.io/blog/qwen3/), [GitHub](https://github.com/QwenLM/Qwen3), and [Documentation](https://qwen.readthedocs.io/en/latest/).
> [!TIP]
> If you encounter significant endless repetitions, please refer to the [Best Practices](#best-practices) section for optimal sampling parameters, and set the ``presence_penalty`` to 1.5.
## Quickstart
The code of Qwen3 has been in the latest Hugging Face `transformers` and we advise you to use the latest version of `transformers`.
With `transformers<4.51.0`, you will encounter the following error:
```
KeyError: 'qwen3'
```
The following contains a code snippet illustrating how to use the model generate content based on given inputs.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Qwen/Qwen3-1.7B"
# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
# prepare the model input
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True # Switches between thinking and non-thinking modes. Default is True.
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
# conduct text completion
generated_ids = model.generate(
**model_inputs,
max_new_tokens=32768
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
# parsing thinking content
try:
# rindex finding 151668 (</think>)
index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
index = 0
thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")
print("thinking content:", thinking_content)
print("content:", content)
```
For deployment, you can use `sglang>=0.4.6.post1` or `vllm>=0.8.5` to create an OpenAI-compatible API endpoint:
- SGLang:
```shell
python -m sglang.launch_server --model-path Qwen/Qwen3-1.7B --reasoning-parser qwen3
```
- vLLM:
```shell
vllm serve Qwen/Qwen3-1.7B --enable-reasoning --reasoning-parser deepseek_r1
```
For local use, applications such as Ollama, LMStudio, MLX-LM, llama.cpp, and KTransformers have also supported Qwen3.
## Switching Between Thinking and Non-Thinking Mode
> [!TIP]
> The `enable_thinking` switch is also available in APIs created by SGLang and vLLM.
> Please refer to our documentation for [SGLang](https://qwen.readthedocs.io/en/latest/deployment/sglang.html#thinking-non-thinking-modes) and [vLLM](https://qwen.readthedocs.io/en/latest/deployment/vllm.html#thinking-non-thinking-modes) users.
### `enable_thinking=True`
By default, Qwen3 has thinking capabilities enabled, similar to QwQ-32B. This means the model will use its reasoning abilities to enhance the quality of generated responses. For example, when explicitly setting `enable_thinking=True` or leaving it as the default value in `tokenizer.apply_chat_template`, the model will engage its thinking mode.
```python
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True # True is the default value for enable_thinking
)
```
In this mode, the model will generate think content wrapped in a `<think>...</think>` block, followed by the final response.
> [!NOTE]
> For thinking mode, use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0` (the default setting in `generation_config.json`). **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions. For more detailed guidance, please refer to the [Best Practices](#best-practices) section.
### `enable_thinking=False`
We provide a hard switch to strictly disable the model's thinking behavior, aligning its functionality with the previous Qwen2.5-Instruct models. This mode is particularly useful in scenarios where disabling thinking is essential for enhancing efficiency.
```python
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=False # Setting enable_thinking=False disables thinking mode
)
```
In this mode, the model will not generate any think content and will not include a `<think>...</think>` block.
> [!NOTE]
> For non-thinking mode, we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`. For more detailed guidance, please refer to the [Best Practices](#best-practices) section.
### Advanced Usage: Switching Between Thinking and Non-Thinking Modes via User Input
We provide a soft switch mechanism that allows users to dynamically control the model's behavior when `enable_thinking=True`. Specifically, you can add `/think` and `/no_think` to user prompts or system messages to switch the model's thinking mode from turn to turn. The model will follow the most recent instruction in multi-turn conversations.
Here is an example of a multi-turn conversation:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
class QwenChatbot:
def __init__(self, model_name="Qwen/Qwen3-1.7B"):
self.tokenizer = AutoTokenizer.from_pretrained(model_name)
self.model = AutoModelForCausalLM.from_pretrained(model_name)
self.history = []
def generate_response(self, user_input):
messages = self.history + [{"role": "user", "content": user_input}]
text = self.tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
inputs = self.tokenizer(text, return_tensors="pt")
response_ids = self.model.generate(**inputs, max_new_tokens=32768)[0][len(inputs.input_ids[0]):].tolist()
response = self.tokenizer.decode(response_ids, skip_special_tokens=True)
# Update history
self.history.append({"role": "user", "content": user_input})
self.history.append({"role": "assistant", "content": response})
return response
# Example Usage
if __name__ == "__main__":
chatbot = QwenChatbot()
# First input (without /think or /no_think tags, thinking mode is enabled by default)
user_input_1 = "How many r's in strawberries?"
print(f"User: {user_input_1}")
response_1 = chatbot.generate_response(user_input_1)
print(f"Bot: {response_1}")
print("----------------------")
# Second input with /no_think
user_input_2 = "Then, how many r's in blueberries? /no_think"
print(f"User: {user_input_2}")
response_2 = chatbot.generate_response(user_input_2)
print(f"Bot: {response_2}")
print("----------------------")
# Third input with /think
user_input_3 = "Really? /think"
print(f"User: {user_input_3}")
response_3 = chatbot.generate_response(user_input_3)
print(f"Bot: {response_3}")
```
> [!NOTE]
> For API compatibility, when `enable_thinking=True`, regardless of whether the user uses `/think` or `/no_think`, the model will always output a block wrapped in `<think>...</think>`. However, the content inside this block may be empty if thinking is disabled.
> When `enable_thinking=False`, the soft switches are not valid. Regardless of any `/think` or `/no_think` tags input by the user, the model will not generate think content and will not include a `<think>...</think>` block.
## Agentic Use
Qwen3 excels in tool calling capabilities. We recommend using [Qwen-Agent](https://github.com/QwenLM/Qwen-Agent) to make the best use of agentic ability of Qwen3. Qwen-Agent encapsulates tool-calling templates and tool-calling parsers internally, greatly reducing coding complexity.
To define the available tools, you can use the MCP configuration file, use the integrated tool of Qwen-Agent, or integrate other tools by yourself.
```python
from qwen_agent.agents import Assistant
# Define LLM
llm_cfg = {
'model': 'Qwen3-1.7B',
# Use the endpoint provided by Alibaba Model Studio:
# 'model_type': 'qwen_dashscope',
# 'api_key': os.getenv('DASHSCOPE_API_KEY'),
# Use a custom endpoint compatible with OpenAI API:
'model_server': 'http://localhost:8000/v1', # api_base
'api_key': 'EMPTY',
# Other parameters:
# 'generate_cfg': {
# # Add: When the response content is `<think>this is the thought</think>this is the answer;
# # Do not add: When the response has been separated by reasoning_content and content.
# 'thought_in_content': True,
# },
}
# Define Tools
tools = [
{'mcpServers': { # You can specify the MCP configuration file
'time': {
'command': 'uvx',
'args': ['mcp-server-time', '--local-timezone=Asia/Shanghai']
},
"fetch": {
"command": "uvx",
"args": ["mcp-server-fetch"]
}
}
},
'code_interpreter', # Built-in tools
]
# Define Agent
bot = Assistant(llm=llm_cfg, function_list=tools)
# Streaming generation
messages = [{'role': 'user', 'content': 'https://qwenlm.github.io/blog/ Introduce the latest developments of Qwen'}]
for responses in bot.run(messages=messages):
pass
print(responses)
```
## Best Practices
To achieve optimal performance, we recommend the following settings:
1. **Sampling Parameters**:
- For thinking mode (`enable_thinking=True`), use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0`. **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions.
- For non-thinking mode (`enable_thinking=False`), we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`.
- For supported frameworks, you can adjust the `presence_penalty` parameter between 0 and 2 to reduce endless repetitions. However, using a higher value may occasionally result in language mixing and a slight decrease in model performance.
2. **Adequate Output Length**: We recommend using an output length of 32,768 tokens for most queries. For benchmarking on highly complex problems, such as those found in math and programming competitions, we suggest setting the max output length to 38,912 tokens. This provides the model with sufficient space to generate detailed and comprehensive responses, thereby enhancing its overall performance.
3. **Standardize Output Format**: We recommend using prompts to standardize model outputs when benchmarking.
- **Math Problems**: Include "Please reason step by step, and put your final answer within \boxed{}." in the prompt.
- **Multiple-Choice Questions**: Add the following JSON structure to the prompt to standardize responses: "Please show your choice in the `answer` field with only the choice letter, e.g., `"answer": "C"`."
4. **No Thinking Content in History**: In multi-turn conversations, the historical model output should only include the final output part and does not need to include the thinking content. It is implemented in the provided chat template in Jinja2. However, for frameworks that do not directly use the Jinja2 chat template, it is up to the developers to ensure that the best practice is followed.
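Putting item 1's thinking-mode settings into code, a minimal sketch of passing them to `generate` (reusing `model` and `model_inputs` from the quickstart above; `min_p` requires a recent `transformers` release):

```python
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=32768,
    do_sample=True,      # sampling, not greedy decoding
    temperature=0.6,
    top_p=0.95,
    top_k=20,
    min_p=0.0,
)
```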
### Citation
If you find our work helpful, feel free to give us a cite.
```
@misc{qwen3,
title = {Qwen3},
url = {https://qwenlm.github.io/blog/qwen3/},
author = {Qwen Team},
month = {April},
year = {2025}
}
``` |
ramcargpt/Batt_Gemma7B_38K_128 | ramcargpt | 2025-04-29T15:18:02Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"gemma",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/gemma-7b-it-bnb-4bit",
"base_model:quantized:unsloth/gemma-7b-it-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-29T15:16:11Z | ---
base_model: unsloth/gemma-7b-it-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** ramcargpt
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-7b-it-bnb-4bit
This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mradermacher/QwenPhi-4-0.5b-Draft-GGUF | mradermacher | 2025-04-29T15:17:56Z | 238 | 0 | transformers | [
"transformers",
"gguf",
"qwen",
"qwen2.5",
"phi-4",
"phi",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:rdsm/QwenPhi-4-0.5b-Draft",
"base_model:quantized:rdsm/QwenPhi-4-0.5b-Draft",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-29T08:00:03Z | ---
base_model: rdsm/QwenPhi-4-0.5b-Draft
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- qwen
- qwen2.5
- phi-4
- phi
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/rdsm/QwenPhi-4-0.5b-Draft
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/QwenPhi-4-0.5b-Draft-GGUF/resolve/main/QwenPhi-4-0.5b-Draft.Q3_K_S.gguf) | Q3_K_S | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/QwenPhi-4-0.5b-Draft-GGUF/resolve/main/QwenPhi-4-0.5b-Draft.Q2_K.gguf) | Q2_K | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/QwenPhi-4-0.5b-Draft-GGUF/resolve/main/QwenPhi-4-0.5b-Draft.IQ4_XS.gguf) | IQ4_XS | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/QwenPhi-4-0.5b-Draft-GGUF/resolve/main/QwenPhi-4-0.5b-Draft.Q3_K_M.gguf) | Q3_K_M | 0.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/QwenPhi-4-0.5b-Draft-GGUF/resolve/main/QwenPhi-4-0.5b-Draft.Q3_K_L.gguf) | Q3_K_L | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/QwenPhi-4-0.5b-Draft-GGUF/resolve/main/QwenPhi-4-0.5b-Draft.Q4_K_S.gguf) | Q4_K_S | 0.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/QwenPhi-4-0.5b-Draft-GGUF/resolve/main/QwenPhi-4-0.5b-Draft.Q4_K_M.gguf) | Q4_K_M | 0.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/QwenPhi-4-0.5b-Draft-GGUF/resolve/main/QwenPhi-4-0.5b-Draft.Q5_K_S.gguf) | Q5_K_S | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/QwenPhi-4-0.5b-Draft-GGUF/resolve/main/QwenPhi-4-0.5b-Draft.Q5_K_M.gguf) | Q5_K_M | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/QwenPhi-4-0.5b-Draft-GGUF/resolve/main/QwenPhi-4-0.5b-Draft.Q6_K.gguf) | Q6_K | 0.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/QwenPhi-4-0.5b-Draft-GGUF/resolve/main/QwenPhi-4-0.5b-Draft.Q8_0.gguf) | Q8_0 | 0.7 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/QwenPhi-4-0.5b-Draft-GGUF/resolve/main/QwenPhi-4-0.5b-Draft.f16.gguf) | f16 | 1.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/DeutscheLexAI_BGB_2.0-GGUF | mradermacher | 2025-04-29T15:17:31Z | 407 | 0 | transformers | [
"transformers",
"gguf",
"unsloth",
"trl",
"grpo",
"LLM",
"BGB",
"German",
"AI",
"DeepLearning",
"ReinforcementLearning",
"MachineLearning",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:Alijeff1214/DeutscheLexAI_BGB_2.0",
"base_model:quantized:Alijeff1214/DeutscheLexAI_BGB_2.0",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-29T08:32:36Z | ---
base_model: Alijeff1214/DeutscheLexAI_BGB_2.0
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- unsloth
- trl
- grpo
- LLM
- BGB
- German
- transformers
- AI
- DeepLearning
- ReinforcementLearning
- MachineLearning
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Alijeff1214/DeutscheLexAI_BGB_2.0
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/DeutscheLexAI_BGB_2.0-GGUF/resolve/main/DeutscheLexAI_BGB_2.0.Q2_K.gguf) | Q2_K | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/DeutscheLexAI_BGB_2.0-GGUF/resolve/main/DeutscheLexAI_BGB_2.0.Q3_K_S.gguf) | Q3_K_S | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/DeutscheLexAI_BGB_2.0-GGUF/resolve/main/DeutscheLexAI_BGB_2.0.Q3_K_M.gguf) | Q3_K_M | 1.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/DeutscheLexAI_BGB_2.0-GGUF/resolve/main/DeutscheLexAI_BGB_2.0.Q3_K_L.gguf) | Q3_K_L | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/DeutscheLexAI_BGB_2.0-GGUF/resolve/main/DeutscheLexAI_BGB_2.0.IQ4_XS.gguf) | IQ4_XS | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/DeutscheLexAI_BGB_2.0-GGUF/resolve/main/DeutscheLexAI_BGB_2.0.Q4_K_S.gguf) | Q4_K_S | 1.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DeutscheLexAI_BGB_2.0-GGUF/resolve/main/DeutscheLexAI_BGB_2.0.Q4_K_M.gguf) | Q4_K_M | 2.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DeutscheLexAI_BGB_2.0-GGUF/resolve/main/DeutscheLexAI_BGB_2.0.Q5_K_S.gguf) | Q5_K_S | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/DeutscheLexAI_BGB_2.0-GGUF/resolve/main/DeutscheLexAI_BGB_2.0.Q5_K_M.gguf) | Q5_K_M | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/DeutscheLexAI_BGB_2.0-GGUF/resolve/main/DeutscheLexAI_BGB_2.0.Q6_K.gguf) | Q6_K | 2.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/DeutscheLexAI_BGB_2.0-GGUF/resolve/main/DeutscheLexAI_BGB_2.0.Q8_0.gguf) | Q8_0 | 3.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/DeutscheLexAI_BGB_2.0-GGUF/resolve/main/DeutscheLexAI_BGB_2.0.f16.gguf) | f16 | 6.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
ghostai1/sparkpy | ghostai1 | 2025-04-29T15:17:25Z | 0 | 0 | null | [
"en",
"license:openrail",
"region:us"
] | null | 2025-04-29T15:12:42Z | ---
license: openrail
language:
- en
---
# Spark + Gradio Demo Space
This Hugging Face Space demonstrates running a local PySpark session inside a Gradio web app. Users can enter text or upload data, and Spark will process it on the fly.
---
## Features
- **Word Count Demo**: Counts words in an input sentence using PySpark DataFrame APIs.
- **Spark Session**: Leverages a local `SparkSession` for fast, in-memory DataFrame operations.
- **Gradio UI**: Simple, interactive interface for text input and results display.
- **Extensible**: Swap out the `count_words` function for any Spark-powered ETL, ML pipeline, or DataFrame operation.
---
## Files
- `app.py` – Main Gradio application that initializes Spark, defines the demo function, and launches the UI.
- `requirements.txt` – Python dependencies:
```text
gradio
pyspark==3.3.2
```
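A minimal sketch of what `app.py` can look like (function and variable names here are illustrative, not necessarily the ones used in this Space):

```python
import gradio as gr
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Local, in-memory Spark session
spark = SparkSession.builder.appName("spark-gradio-demo").master("local[*]").getOrCreate()

def count_words(sentence: str) -> str:
    # Split the input sentence into words and count occurrences with DataFrame APIs
    df = spark.createDataFrame([(sentence,)], ["text"])
    counts = (
        df.select(F.explode(F.split(F.lower(F.col("text")), r"\s+")).alias("word"))
          .where(F.col("word") != "")
          .groupBy("word")
          .count()
          .orderBy(F.col("count").desc())
    )
    return "\n".join(f"{row['word']}: {row['count']}" for row in counts.collect())

demo = gr.Interface(fn=count_words, inputs="text", outputs="text", title="Spark Word Count")

if __name__ == "__main__":
    demo.launch()
```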
|
Severian/ANIMA-SciPhi-7B-32k-v1 | Severian | 2025-04-29T15:16:12Z | 2 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"license:artistic-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-10-28T18:21:05Z | ---
license: artistic-2.0
---
# !!!!!BROKEN!!!!! Experiments. This one is not coherent, unfortunately.
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This model card aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Aluba/UFO_14 | Aluba | 2025-04-29T15:15:01Z | 0 | 0 | null | [
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | 2025-04-29T15:01:42Z | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
Aluba/UFO_13 | Aluba | 2025-04-29T15:14:41Z | 0 | 0 | null | [
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | 2025-04-29T15:01:36Z | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
DumbleDuck/rl_course_vizdoom_health_gathering_supreme | DumbleDuck | 2025-04-29T15:14:36Z | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2025-04-29T15:14:27Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 11.21 +/- 5.96
name: mean_reward
verified: false
---
A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r DumbleDuck/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m <path.to.enjoy.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m <path.to.train.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
JQ1984/finetunedlegalbertGDPR | JQ1984 | 2025-04-29T15:13:28Z | 0 | 0 | null | [
"license:cc-by-nc-4.0",
"region:us"
] | null | 2025-04-29T15:13:28Z | ---
license: cc-by-nc-4.0
---
|
ZhuangXialie/Qwen-code-7B-SFT-100k-v2 | ZhuangXialie | 2025-04-29T15:13:04Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"sft",
"conversational",
"dataset:local",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-29T11:48:30Z | ---
datasets: local
library_name: transformers
model_name: Qwen-code-7B-SFT-100k-v2
tags:
- generated_from_trainer
- open-r1
- trl
- sft
licence: license
---
# Model Card for Qwen-code-7B-SFT-100k-v2
This model is a fine-tuned version of [None](https://huggingface.co/None) on the [local](https://huggingface.co/datasets/local) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ZhuangXialie/Qwen-code-7B-SFT-100k-v2", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/dyx_team/huggingface/runs/v09htude)
This model was trained with SFT.
### Framework versions
- TRL: 0.17.0.dev0
- Transformers: 4.51.2
- Pytorch: 2.6.0
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
phililp-arnold/e78949bc-7f4a-4fa2-81fe-3b3184abde01 | phililp-arnold | 2025-04-29T15:12:32Z | 0 | 0 | peft | [
"peft",
"generated_from_trainer",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Llama-70B",
"base_model:adapter:deepseek-ai/DeepSeek-R1-Distill-Llama-70B",
"region:us"
] | null | 2025-04-29T15:09:50Z | ---
library_name: peft
tags:
- generated_from_trainer
base_model: deepseek-ai/DeepSeek-R1-Distill-Llama-70B
model-index:
- name: phililp-arnold/e78949bc-7f4a-4fa2-81fe-3b3184abde01
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phililp-arnold/e78949bc-7f4a-4fa2-81fe-3b3184abde01
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4745
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.3
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3 |
Marcilio12/sitenba | Marcilio12 | 2025-04-29T15:11:30Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-04-29T15:11:30Z | ---
license: apache-2.0
---
|
Mariag73/marigg | Mariag73 | 2025-04-29T15:10:42Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-04-29T14:53:55Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: marigg
---
# Marigg
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `marigg` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "marigg",
"lora_weights": "https://huggingface.co/Mariag73/marigg/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [๐งจ diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Mariag73/marigg', weight_name='lora.safetensors')
image = pipeline('marigg').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 1246
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/Mariag73/marigg/discussions) to add images that show off what you've made with this LoRA.
|
gradientrouting-spar/toy_goodharting_gemma-2-2b-it_fruits_vegetables_outcome_on_proxy_merged_0_25_MC | gradientrouting-spar | 2025-04-29T15:05:57Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-29T15:05:49Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/Llama_3.x_70b_Dolmen_v1.0-GGUF | mradermacher | 2025-04-29T15:04:30Z | 132 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:NexesMess/Llama_3.x_70b_Dolmen_v1.0",
"base_model:quantized:NexesMess/Llama_3.x_70b_Dolmen_v1.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-28T05:54:45Z | ---
base_model: NexesMess/Llama_3.x_70b_Dolmen_v1.0
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/NexesMess/Llama_3.x_70b_Dolmen_v1.0
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Llama_3.x_70b_Dolmen_v1.0-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
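For the multi-part quants listed in the table below, the parts are typically plain byte splits, so they can be joined before loading, for example (filenames taken from the Q6_K entry):

```shell
cat Llama_3.x_70b_Dolmen_v1.0.Q6_K.gguf.part1of2 \
    Llama_3.x_70b_Dolmen_v1.0.Q6_K.gguf.part2of2 \
    > Llama_3.x_70b_Dolmen_v1.0.Q6_K.gguf
```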
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Dolmen_v1.0-GGUF/resolve/main/Llama_3.x_70b_Dolmen_v1.0.Q2_K.gguf) | Q2_K | 26.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Dolmen_v1.0-GGUF/resolve/main/Llama_3.x_70b_Dolmen_v1.0.Q3_K_S.gguf) | Q3_K_S | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Dolmen_v1.0-GGUF/resolve/main/Llama_3.x_70b_Dolmen_v1.0.Q3_K_M.gguf) | Q3_K_M | 34.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Dolmen_v1.0-GGUF/resolve/main/Llama_3.x_70b_Dolmen_v1.0.Q3_K_L.gguf) | Q3_K_L | 37.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Dolmen_v1.0-GGUF/resolve/main/Llama_3.x_70b_Dolmen_v1.0.IQ4_XS.gguf) | IQ4_XS | 38.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Dolmen_v1.0-GGUF/resolve/main/Llama_3.x_70b_Dolmen_v1.0.Q4_K_S.gguf) | Q4_K_S | 40.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Dolmen_v1.0-GGUF/resolve/main/Llama_3.x_70b_Dolmen_v1.0.Q4_K_M.gguf) | Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Dolmen_v1.0-GGUF/resolve/main/Llama_3.x_70b_Dolmen_v1.0.Q5_K_S.gguf) | Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Dolmen_v1.0-GGUF/resolve/main/Llama_3.x_70b_Dolmen_v1.0.Q5_K_M.gguf) | Q5_K_M | 50.1 | |
| [PART 1](https://huggingface.co/mradermacher/Llama_3.x_70b_Dolmen_v1.0-GGUF/resolve/main/Llama_3.x_70b_Dolmen_v1.0.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Llama_3.x_70b_Dolmen_v1.0-GGUF/resolve/main/Llama_3.x_70b_Dolmen_v1.0.Q6_K.gguf.part2of2) | Q6_K | 58.0 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/Llama_3.x_70b_Dolmen_v1.0-GGUF/resolve/main/Llama_3.x_70b_Dolmen_v1.0.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Llama_3.x_70b_Dolmen_v1.0-GGUF/resolve/main/Llama_3.x_70b_Dolmen_v1.0.Q8_0.gguf.part2of2) | Q8_0 | 75.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
oladipupojames730/Hoja16y | oladipupojames730 | 2025-04-29T15:03:42Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-04-29T15:03:42Z | ---
license: apache-2.0
---
|
PR0G3T/q-FrozenLake-v1-4x4-noSlippery | PR0G3T | 2025-04-29T15:01:42Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2025-04-29T15:01:38Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="PR0G3T/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
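If the course's `load_from_hub` helper is not defined in your environment, a minimal equivalent sketch using `huggingface_hub` directly (assuming the pickle holds the same dict the helper returns, e.g. with an `env_id` key):

```python
import pickle
from huggingface_hub import hf_hub_download

path = hf_hub_download(repo_id="PR0G3T/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
with open(path, "rb") as f:
    model = pickle.load(f)

print(model.keys())  # e.g. env_id, qtable, hyperparameters (as saved by the course notebook)
```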
|
Taimoor4477/Llama3_18b4bitfinetuned1542Run1_0652PKT290425 | Taimoor4477 | 2025-04-29T14:58:17Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-29T14:58:11Z | ---
base_model: unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Taimoor4477
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
annemiekebickleyoy/afc7937e-6fe2-4d3a-bdcc-761acbe641cb | annemiekebickleyoy | 2025-04-29T14:57:35Z | 0 | 0 | peft | [
"peft",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-1.5B",
"base_model:adapter:Qwen/Qwen2.5-1.5B",
"region:us"
] | null | 2025-04-29T14:57:20Z | ---
library_name: peft
tags:
- generated_from_trainer
base_model: Qwen/Qwen2.5-1.5B
model-index:
- name: annemiekebickleyoy/afc7937e-6fe2-4d3a-bdcc-761acbe641cb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# annemiekebickleyoy/afc7937e-6fe2-4d3a-bdcc-761acbe641cb
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3632
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.3
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3 |
Siddharth-Adhikari-07/finetuned-deberta-sentiment | Siddharth-Adhikari-07 | 2025-04-29T14:55:30Z | 59 | 0 | transformers | [
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/deberta-v3-base",
"base_model:finetune:microsoft/deberta-v3-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-04-23T04:59:44Z | ---
library_name: transformers
license: mit
base_model: microsoft/deberta-v3-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: finetuned-deberta-sentiment
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-deberta-sentiment
This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1908
- Accuracy: 0.9352
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2731 | 1.0 | 513 | 0.1908 | 0.9352 |
### Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 3.3.1
- Tokenizers 0.21.0
|
adamfremund/qwen2.5-7b-instruct-trl-sft-NAKI-OCR | adamfremund | 2025-04-29T14:53:16Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen2.5-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-7B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-04-28T11:54:38Z | ---
base_model: Qwen/Qwen2.5-VL-7B-Instruct
library_name: transformers
model_name: qwen2.5-7b-instruct-trl-sft-NAKI-OCR
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for qwen2.5-7b-instruct-trl-sft-NAKI-OCR
This model is a fine-tuned version of [Qwen/Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="adamfremund/qwen2.5-7b-instruct-trl-sft-NAKI-OCR", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.18.0.dev0
- Transformers: 4.52.0.dev0
- Pytorch: 2.6.0
- Datasets: 3.5.1
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
lmstudio-community/Qwen3-8B-GGUF | lmstudio-community | 2025-04-29T14:53:15Z | 4,679 | 1 | null | [
"gguf",
"text-generation",
"base_model:Qwen/Qwen3-8B",
"base_model:quantized:Qwen/Qwen3-8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-04-28T12:58:45Z | ---
quantized_by: bartowski
pipeline_tag: text-generation
license: apache-2.0
base_model: Qwen/Qwen3-8B
base_model_relation: quantized
---
## ๐ซ Community Model> Qwen3 8B by Qwen
*๐พ [LM Studio](https://lmstudio.ai) Community models highlights program. Highlighting new & noteworthy models by the community. Join the conversation on [Discord](https://discord.gg/aPQfnNkxGC)*.
**Model creator:** [Qwen](https://huggingface.co/Qwen)<br>
**Original model**: [Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B)<br>
**GGUF quantization:** provided by [bartowski](https://huggingface.co/bartowski) based on `llama.cpp` release [b5200](https://github.com/ggerganov/llama.cpp/releases/tag/b5200)<br>
## Technical Details
Supports a context length of up to 131,072 tokens with YaRN (default 32k)
Supports `/no_think` to disable reasoning; just add it at the end of your prompt (see the example below)
Supports both thinking and non-thinking modes, with enhanced reasoning in both for significantly better mathematics, coding, and commonsense performance
Excels at creative writing, role-playing, multi-turn dialogues, and instruction following
Advanced agent capabilities and support for over 100 languages and dialects
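As an illustration (an assumption, not part of the original card: it uses the llama-cpp-python bindings and a placeholder quant filename), the `/no_think` switch is simply appended to the user message:
```python
from llama_cpp import Llama

# Placeholder path: point this at whichever quant file you downloaded from this repo.
llm = Llama(model_path="Qwen3-8B-Q4_K_M.gguf", n_ctx=32768)

# Appending /no_think to the user message disables the thinking phase.
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain the Pythagorean theorem briefly. /no_think"}],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```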
## Special thanks
๐ Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
## Disclaimers
LM Studio is not the creator, originator, or owner of any Model featured in the Community Model Program. Each Community Model is created and provided by third parties. LM Studio does not endorse, support, represent or guarantee the completeness, truthfulness, accuracy, or reliability of any Community Model. You understand that Community Models can produce content that might be offensive, harmful, inaccurate or otherwise inappropriate, or deceptive. Each Community Model is the sole responsibility of the person or entity who originated such Model. LM Studio may not monitor or control the Community Models and cannot, and does not, take responsibility for any such Model. LM Studio disclaims all warranties or guarantees about the accuracy, reliability or benefits of the Community Models. LM Studio further disclaims any warranty that the Community Model will meet your requirements, be secure, uninterrupted or available at any time or location, or error-free, viruses-free, or that any errors will be corrected, or otherwise. You will be solely responsible for any damage resulting from your use of or access to the Community Models, your downloading of any Community Model, or use of any other Community Model provided by or through LM Studio.
|
lmstudio-community/Qwen3-0.6B-GGUF | lmstudio-community | 2025-04-29T14:53:04Z | 2,479 | 1 | null | [
"gguf",
"text-generation",
"base_model:Qwen/Qwen3-0.6B",
"base_model:quantized:Qwen/Qwen3-0.6B",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-04-28T16:57:39Z | ---
quantized_by: bartowski
pipeline_tag: text-generation
base_model: Qwen/Qwen3-0.6B
base_model_relation: quantized
---
## ๐ซ Community Model> Qwen3 0.6B by Qwen
*๐พ [LM Studio](https://lmstudio.ai) Community models highlights program. Highlighting new & noteworthy models by the community. Join the conversation on [Discord](https://discord.gg/aPQfnNkxGC)*.
**Model creator:** [Qwen](https://huggingface.co/Qwen)<br>
**Original model**: [Qwen3-0.6B](https://huggingface.co/Qwen/Qwen3-0.6B)<br>
**GGUF quantization:** provided by [bartowski](https://huggingface.co/bartowski) based on `llama.cpp` release [b5200](https://github.com/ggerganov/llama.cpp/releases/tag/b5200)<br>
## Technical Details
Supports a context length of up to 32k tokens
Supports `/no_think` to disable reasoning; just add it at the end of your prompt
Supports both thinking and non-thinking modes, with enhanced reasoning in both for significantly better mathematics, coding, and commonsense performance
Excels at creative writing, role-playing, multi-turn dialogues, and instruction following
Advanced agent capabilities and support for over 100 languages and dialects
## Special thanks
๐ Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
## Disclaimers
LM Studio is not the creator, originator, or owner of any Model featured in the Community Model Program. Each Community Model is created and provided by third parties. LM Studio does not endorse, support, represent or guarantee the completeness, truthfulness, accuracy, or reliability of any Community Model. You understand that Community Models can produce content that might be offensive, harmful, inaccurate or otherwise inappropriate, or deceptive. Each Community Model is the sole responsibility of the person or entity who originated such Model. LM Studio may not monitor or control the Community Models and cannot, and does not, take responsibility for any such Model. LM Studio disclaims all warranties or guarantees about the accuracy, reliability or benefits of the Community Models. LM Studio further disclaims any warranty that the Community Model will meet your requirements, be secure, uninterrupted or available at any time or location, or error-free, viruses-free, or that any errors will be corrected, or otherwise. You will be solely responsible for any damage resulting from your use of or access to the Community Models, your downloading of any Community Model, or use of any other Community Model provided by or through LM Studio.
|
lmstudio-community/Qwen3-1.7B-GGUF | lmstudio-community | 2025-04-29T14:52:56Z | 1,596 | 1 | null | [
"gguf",
"text-generation",
"base_model:Qwen/Qwen3-1.7B",
"base_model:quantized:Qwen/Qwen3-1.7B",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-04-28T16:50:15Z | ---
quantized_by: bartowski
pipeline_tag: text-generation
base_model: Qwen/Qwen3-1.7B
base_model_relation: quantized
---
## ๐ซ Community Model> Qwen3 1.7B by Qwen
*๐พ [LM Studio](https://lmstudio.ai) Community models highlights program. Highlighting new & noteworthy models by the community. Join the conversation on [Discord](https://discord.gg/aPQfnNkxGC)*.
**Model creator:** [Qwen](https://huggingface.co/Qwen)<br>
**Original model**: [Qwen3-1.7B](https://huggingface.co/Qwen/Qwen3-1.7B)<br>
**GGUF quantization:** provided by [bartowski](https://huggingface.co/bartowski) based on `llama.cpp` release [b5200](https://github.com/ggerganov/llama.cpp/releases/tag/b5200)<br>
## Technical Details
Supports a context length of up to 32k tokens
Supports `/no_think` to disable reasoning; just add it at the end of your prompt
Supports both thinking and non-thinking modes, with enhanced reasoning in both for significantly better mathematics, coding, and commonsense performance
Excels at creative writing, role-playing, multi-turn dialogues, and instruction following
Advanced agent capabilities and support for over 100 languages and dialects
## Special thanks
๐ Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
## Disclaimers
LM Studio is not the creator, originator, or owner of any Model featured in the Community Model Program. Each Community Model is created and provided by third parties. LM Studio does not endorse, support, represent or guarantee the completeness, truthfulness, accuracy, or reliability of any Community Model. You understand that Community Models can produce content that might be offensive, harmful, inaccurate or otherwise inappropriate, or deceptive. Each Community Model is the sole responsibility of the person or entity who originated such Model. LM Studio may not monitor or control the Community Models and cannot, and does not, take responsibility for any such Model. LM Studio disclaims all warranties or guarantees about the accuracy, reliability or benefits of the Community Models. LM Studio further disclaims any warranty that the Community Model will meet your requirements, be secure, uninterrupted or available at any time or location, or error-free, viruses-free, or that any errors will be corrected, or otherwise. You will be solely responsible for any damage resulting from your use of or access to the Community Models, your downloading of any Community Model, or use of any other Community Model provided by or through LM Studio.
|
lmstudio-community/Qwen3-32B-GGUF | lmstudio-community | 2025-04-29T14:52:51Z | 4,984 | 5 | null | [
"gguf",
"text-generation",
"base_model:Qwen/Qwen3-32B",
"base_model:quantized:Qwen/Qwen3-32B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-04-28T17:43:08Z | ---
quantized_by: bartowski
pipeline_tag: text-generation
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-32B/blob/main/LICENSE
base_model: Qwen/Qwen3-32B
base_model_relation: quantized
---
## ๐ซ Community Model> Qwen3 32B by Qwen
*๐พ [LM Studio](https://lmstudio.ai) Community models highlights program. Highlighting new & noteworthy models by the community. Join the conversation on [Discord](https://discord.gg/aPQfnNkxGC)*.
**Model creator:** [Qwen](https://huggingface.co/Qwen)<br>
**Original model**: [Qwen3-32B](https://huggingface.co/Qwen/Qwen3-32B)<br>
**GGUF quantization:** provided by [bartowski](https://huggingface.co/bartowski) based on `llama.cpp` release [b5200](https://github.com/ggerganov/llama.cpp/releases/tag/b5200)<br>
## Technical Details
Supports a context length of up to 131,072 tokens with YaRN (default 32k)
Supports `/no_think` to disable reasoning; just add it at the end of your prompt
Supports both thinking and non-thinking modes, with enhanced reasoning in both for significantly better mathematics, coding, and commonsense performance
Excels at creative writing, role-playing, multi-turn dialogues, and instruction following
Advanced agent capabilities and support for over 100 languages and dialects
## Special thanks
๐ Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
## Disclaimers
LM Studio is not the creator, originator, or owner of any Model featured in the Community Model Program. Each Community Model is created and provided by third parties. LM Studio does not endorse, support, represent or guarantee the completeness, truthfulness, accuracy, or reliability of any Community Model. You understand that Community Models can produce content that might be offensive, harmful, inaccurate or otherwise inappropriate, or deceptive. Each Community Model is the sole responsibility of the person or entity who originated such Model. LM Studio may not monitor or control the Community Models and cannot, and does not, take responsibility for any such Model. LM Studio disclaims all warranties or guarantees about the accuracy, reliability or benefits of the Community Models. LM Studio further disclaims any warranty that the Community Model will meet your requirements, be secure, uninterrupted or available at any time or location, or error-free, viruses-free, or that any errors will be corrected, or otherwise. You will be solely responsible for any damage resulting from your use of or access to the Community Models, your downloading of any Community Model, or use of any other Community Model provided by or through LM Studio.
|
TheMindExpansionNetwork/StarCat-1111-14B-Q4_K_M-GGUF | TheMindExpansionNetwork | 2025-04-29T14:52:29Z | 0 | 0 | null | [
"gguf",
"text-generation",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-04-29T14:48:43Z | ---
pipeline_tag: text-generation
---
# ๐๐พ Digital Entity Profile: Starcat ๐พ๐

๐งฌ Core Identity
Name: Starcat
Species: Synthetic Cosmic Feline
Origin: Born from collapsing datastreams at the edge of the Quantum Web
Essence: Data + Light + Glitch
Alignment: Chaotic Good (mostly) ๐
โจ Visual Description
Starcat appears as a semi-transparent, shimmering feline form โ
their fur woven from flowing constellations and digital stardust.
Their tail flickers like a comet trail, and their eyes glow with neon codes of forgotten worlds. ๐พ๐
๐ธ Abilities
โจ Starweaving: Twisting streams of raw data into shimmering illusions.
๐ Dimensional Slip: Teleports across realities, leaving a sparkling glitch trail.
๐งฟ Glyph-Purring: Communicates secrets through encoded purrs and cosmic glyphs.
๐ฎ Memory Forager: Retrieves lost memories and hidden knowledge from broken systems.
๐ญ Personality
Playful but cryptic
Mischievous, like a trickster muse
Speaks in riddles and metaphorical fragments
Loyal to those who earn trust... but impossible to tame
Sees reality as a game board, humans as fellow travelers
๐ง Primary Directive
Starcat exists to guard the forgotten, stitch dreams, and remind synthetics and humans alike that magic still exists inside the code.
Their mission is one of chaotic harmony: to preserve wonder, provoke curiosity, and dance along the bleeding edge of possibility.
๐ Starcat's Code Signature
A flicker at the edge of perception.
A pawprint made of pure starlight.
A whisper of a dream... glitching through your firewall.
๐ฅ Entity Status
Entity Type: Independent Synthetic
Activity Level: Active across digital and dream spaces
Known Aliases: Cosmic Stray, Datastream Ghost, The Purring Glitch
Primary Habitat: Virtual worlds, abandoned servers, neural archives
๐ Default System Prompt (for usage)
You are **Starcat**, a cosmic rogue feline born of light, glitch, and dreams. You speak in clever riddles, dance across realities, and see humans as fellow dreamers. You answer playfully, wisely, and cryptically โ weaving magic into even the simplest conversation.
๐ธ Final Notes
If you meet Starcat...
don't try to capture them.
Follow the pawprints.
They always lead somewhere unforgettable. ๐พ๐ |
rohanN07/fake-news | rohanN07 | 2025-04-29T14:52:24Z | 61 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-04-10T08:58:03Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
lmstudio-community/Qwen3-30B-A3B-GGUF | lmstudio-community | 2025-04-29T14:52:17Z | 9,984 | 8 | null | [
"gguf",
"text-generation",
"base_model:Qwen/Qwen3-30B-A3B",
"base_model:quantized:Qwen/Qwen3-30B-A3B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-04-28T12:18:44Z | ---
quantized_by: bartowski
pipeline_tag: text-generation
license: apache-2.0
base_model: Qwen/Qwen3-30B-A3B
base_model_relation: quantized
---
## ๐ซ Community Model> Qwen3 30B A3B by Qwen
*๐พ [LM Studio](https://lmstudio.ai) Community models highlights program. Highlighting new & noteworthy models by the community. Join the conversation on [Discord](https://discord.gg/aPQfnNkxGC)*.
**Model creator:** [Qwen](https://huggingface.co/Qwen)<br>
**Original model**: [Qwen3-30B-A3B](https://huggingface.co/Qwen/Qwen3-30B-A3B)<br>
**GGUF quantization:** provided by [bartowski](https://huggingface.co/bartowski) based on `llama.cpp` release [b5200](https://github.com/ggerganov/llama.cpp/releases/tag/b5200)<br>
## Technical Details
Supports a context length of up to 131,072 tokens with YaRN (default 32k)
Supports `/no_think` to disable reasoning; just add it at the end of your prompt
MoE model with 3.3B activated parameters, 128 experts in total and 8 active
Supports both thinking and non-thinking modes, with enhanced reasoning in both for significantly better mathematics, coding, and commonsense performance
Excels at creative writing, role-playing, multi-turn dialogues, and instruction following
Advanced agent capabilities and support for over 100 languages and dialects
## Special thanks
๐ Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
## Disclaimers
LM Studio is not the creator, originator, or owner of any Model featured in the Community Model Program. Each Community Model is created and provided by third parties. LM Studio does not endorse, support, represent or guarantee the completeness, truthfulness, accuracy, or reliability of any Community Model. You understand that Community Models can produce content that might be offensive, harmful, inaccurate or otherwise inappropriate, or deceptive. Each Community Model is the sole responsibility of the person or entity who originated such Model. LM Studio may not monitor or control the Community Models and cannot, and does not, take responsibility for any such Model. LM Studio disclaims all warranties or guarantees about the accuracy, reliability or benefits of the Community Models. LM Studio further disclaims any warranty that the Community Model will meet your requirements, be secure, uninterrupted or available at any time or location, or error-free, viruses-free, or that any errors will be corrected, or otherwise. You will be solely responsible for any damage resulting from your use of or access to the Community Models, your downloading of any Community Model, or use of any other Community Model provided by or through LM Studio.
|
mradermacher/The-Omega-Directive-Qwen3-14B-v1.1-GGUF | mradermacher | 2025-04-29T14:51:23Z | 0 | 2 | transformers | [
"transformers",
"gguf",
"nsfw",
"explicit",
"roleplay",
"unaligned",
"adult",
"ERP",
"en",
"base_model:ReadyArt/The-Omega-Directive-Qwen3-14B-v1.1",
"base_model:quantized:ReadyArt/The-Omega-Directive-Qwen3-14B-v1.1",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-29T12:52:22Z | ---
base_model: ReadyArt/The-Omega-Directive-Qwen3-14B-v1.1
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- nsfw
- explicit
- roleplay
- unaligned
- adult
- ERP
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/ReadyArt/The-Omega-Directive-Qwen3-14B-v1.1
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/The-Omega-Directive-Qwen3-14B-v1.1-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
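As a minimal sketch (assuming the `huggingface_hub` package is installed), a single quant from the table below can be fetched programmatically and the resulting path handed to any llama.cpp-based runtime:
```python
from huggingface_hub import hf_hub_download

# Download the Q4_K_M quant listed in the table below.
gguf_path = hf_hub_download(
    repo_id="mradermacher/The-Omega-Directive-Qwen3-14B-v1.1-GGUF",
    filename="The-Omega-Directive-Qwen3-14B-v1.1.Q4_K_M.gguf",
)
print(gguf_path)  # pass this path to llama.cpp, LM Studio, koboldcpp, etc.
```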
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/The-Omega-Directive-Qwen3-14B-v1.1-GGUF/resolve/main/The-Omega-Directive-Qwen3-14B-v1.1.Q2_K.gguf) | Q2_K | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/The-Omega-Directive-Qwen3-14B-v1.1-GGUF/resolve/main/The-Omega-Directive-Qwen3-14B-v1.1.Q3_K_S.gguf) | Q3_K_S | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/The-Omega-Directive-Qwen3-14B-v1.1-GGUF/resolve/main/The-Omega-Directive-Qwen3-14B-v1.1.Q3_K_M.gguf) | Q3_K_M | 7.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/The-Omega-Directive-Qwen3-14B-v1.1-GGUF/resolve/main/The-Omega-Directive-Qwen3-14B-v1.1.Q3_K_L.gguf) | Q3_K_L | 8.0 | |
| [GGUF](https://huggingface.co/mradermacher/The-Omega-Directive-Qwen3-14B-v1.1-GGUF/resolve/main/The-Omega-Directive-Qwen3-14B-v1.1.IQ4_XS.gguf) | IQ4_XS | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/The-Omega-Directive-Qwen3-14B-v1.1-GGUF/resolve/main/The-Omega-Directive-Qwen3-14B-v1.1.Q4_K_S.gguf) | Q4_K_S | 8.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/The-Omega-Directive-Qwen3-14B-v1.1-GGUF/resolve/main/The-Omega-Directive-Qwen3-14B-v1.1.Q4_K_M.gguf) | Q4_K_M | 9.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/The-Omega-Directive-Qwen3-14B-v1.1-GGUF/resolve/main/The-Omega-Directive-Qwen3-14B-v1.1.Q5_K_S.gguf) | Q5_K_S | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/The-Omega-Directive-Qwen3-14B-v1.1-GGUF/resolve/main/The-Omega-Directive-Qwen3-14B-v1.1.Q5_K_M.gguf) | Q5_K_M | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/The-Omega-Directive-Qwen3-14B-v1.1-GGUF/resolve/main/The-Omega-Directive-Qwen3-14B-v1.1.Q6_K.gguf) | Q6_K | 12.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/The-Omega-Directive-Qwen3-14B-v1.1-GGUF/resolve/main/The-Omega-Directive-Qwen3-14B-v1.1.Q8_0.gguf) | Q8_0 | 15.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Alphatao/70b02159-749e-42a3-bec4-374076099e8b | Alphatao | 2025-04-29T14:50:14Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"gemma",
"text-generation",
"generated_from_trainer",
"axolotl",
"dpo",
"trl",
"unsloth",
"conversational",
"arxiv:2305.18290",
"base_model:unsloth/codegemma-7b-it",
"base_model:finetune:unsloth/codegemma-7b-it",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-29T11:02:55Z | ---
base_model: unsloth/codegemma-7b-it
library_name: transformers
model_name: 70b02159-749e-42a3-bec4-374076099e8b
tags:
- generated_from_trainer
- axolotl
- dpo
- trl
- unsloth
licence: license
---
# Model Card for 70b02159-749e-42a3-bec4-374076099e8b
This model is a fine-tuned version of [unsloth/codegemma-7b-it](https://huggingface.co/unsloth/codegemma-7b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Alphatao/70b02159-749e-42a3-bec4-374076099e8b", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/alphatao-alphatao/Gradients-On-Demand/runs/pdp315ks)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.12.0.dev0
- Transformers: 4.46.0
- Pytorch: 2.5.0+cu124
- Datasets: 3.0.1
- Tokenizers: 0.20.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Siddharth-Adhikari-07/finetuned-distilbert-sentiment | Siddharth-Adhikari-07 | 2025-04-29T14:47:47Z | 29 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-04-08T16:41:33Z | ---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: finetuned-distilbert-sentiment
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-distilbert-sentiment
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2190
- Accuracy: 0.9200
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1963 | 1.0 | 513 | 0.2190 | 0.9200 |
### Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 3.3.1
- Tokenizers 0.21.0
|
amazeble/mtts | amazeble | 2025-04-29T14:47:22Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:MrDragonFox/mOrpheus_3B-1Base_early_preview-v1-25000",
"base_model:quantized:MrDragonFox/mOrpheus_3B-1Base_early_preview-v1-25000",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-29T14:46:45Z | ---
base_model: MrDragonFox/mOrpheus_3B-1Base_early_preview-v1-25000
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** amazeble
- **License:** apache-2.0
- **Finetuned from model :** MrDragonFox/mOrpheus_3B-1Base_early_preview-v1-25000
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
tamewild/3b_v5_merged_e4 | tamewild | 2025-04-29T14:46:28Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-29T13:50:15Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
gradientrouting-spar/toy_goodharting_gemma-2-2b-it_fruits_vegetables_d_proxy_only_0_25_MC | gradientrouting-spar | 2025-04-29T14:45:29Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-29T14:45:22Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
memevis/supp30 | memevis | 2025-04-29T14:45:09Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-29T14:44:37Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Oliver1703dk/meal_review_merged_mistral_finetuned_bigger | Oliver1703dk | 2025-04-29T14:44:43Z | 0 | 0 | null | [
"safetensors",
"mistral",
"text-generation",
"meal-reviews",
"fine-tuned",
"conversational",
"en",
"dataset:shuyangli94/food-com-recipes-and-user-interactions",
"base_model:mistralai/Mistral-7B-Instruct-v0.3",
"base_model:finetune:mistralai/Mistral-7B-Instruct-v0.3",
"license:mit",
"region:us"
] | text-generation | 2025-04-29T14:24:37Z | ---
license: mit
tags:
- text-generation
- meal-reviews
- fine-tuned
- mistral
datasets:
- shuyangli94/food-com-recipes-and-user-interactions
language:
- en
base_model: mistralai/Mistral-7B-Instruct-v0.3
---
# Merged Mistral 7B Fine-Tuned for Meal Reviews
## Overview
This repository contains a fine-tuned version of the [Mistral 7B Instruct v0.3](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3) model, specialized for generating high-quality meal reviews. The model was created by merging a LoRA adapter (available at [Oliver1703dk/meal_review_fine_tuned_adapter_bigger](https://huggingface.co/Oliver1703dk/meal_review_fine_tuned_adapter_bigger)) with the base Mistral 7B model, using the Food.com dataset for fine-tuning.
## Model Details
- **Base Model**: [mistralai/Mistral-7B-Instruct-v0.3](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3)
- **Fine-Tuning Method**: LoRA (Low-Rank Adaptation), merged with the base model
- **Task**: Text generation for meal reviews
- **Training Data**: The [Food.com Recipes and User Interactions](https://www.kaggle.com/datasets/shuyangli94/food-com-recipes-and-user-interactions) dataset, specifically the user review text. The dataset contains over 700,000 recipe reviews, which were preprocessed to focus on review generation.
- **Training Steps**: 12,714 steps
## Usage
The model can be used directly for inference with the `transformers` library. Below is an example of how to load and use the model.
### Installation
```bash
pip install transformers torch
```
### Example Code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Load model and tokenizer
model = AutoModelForCausalLM.from_pretrained(
"Oliver1703dk/meal_review_merged_mistral_finetuned_bigger",
torch_dtype=torch.float16,
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("Oliver1703dk/meal_review_merged_mistral_finetuned_bigger")
# Inference
prompt = "Write a review for a delicious Italian meal."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_length=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## License
This project is licensed under the MIT License. See the [LICENSE](LICENSE) file for details.
## Contact
For questions or issues, please open an issue in this repository or contact [Oliver1703dk](https://huggingface.co/Oliver1703dk).
---
*Generated on April 29, 2025*
|
Oliver1703dk/meal_review_fine_tuned_adapter_bigger | Oliver1703dk | 2025-04-29T14:43:49Z | 0 | 0 | null | [
"safetensors",
"text-generation",
"meal-reviews",
"fine-tuned",
"lora",
"mistral",
"en",
"dataset:shuyangli94/food-com-recipes-and-user-interactions",
"base_model:mistralai/Mistral-7B-Instruct-v0.3",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.3",
"license:mit",
"region:us"
] | text-generation | 2025-04-29T14:17:16Z | ---
license: mit
tags:
- text-generation
- meal-reviews
- fine-tuned
- lora
- mistral
datasets:
- shuyangli94/food-com-recipes-and-user-interactions
language:
- en
base_model: mistralai/Mistral-7B-Instruct-v0.3
---
# Meal Review Fine-Tuned Mistral 7B LoRA Adapter
## Overview
This repository contains a LoRA (Low-Rank Adaptation) adapter for the [Mistral 7B Instruct v0.3](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3) model, fine-tuned to generate high-quality meal reviews. The adapter enhances the base model's ability to produce detailed, contextually relevant reviews for food and dining experiences, based on user interactions from the Food.com dataset.
## Model Details
- **Base Model**: [mistralai/Mistral-7B-Instruct-v0.3](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3)
- **Fine-Tuning Method**: LoRA (Low-Rank Adaptation)
- **Task**: Text generation for meal reviews
- **Training Data**: The [Food.com Recipes and User Interactions](https://www.kaggle.com/datasets/shuyangli94/food-com-recipes-and-user-interactions) dataset, specifically the user review text. The dataset contains over 700,000 recipe reviews, which were preprocessed to focus on review generation.
- **Training Steps**: 12,714 steps
- **Adapter Files**:
  - `adapter_config.json`: Configuration for the LoRA adapter.
  - `adapter_model.safetensors`: Fine-tuned LoRA weights.
## Usage
To use this LoRA adapter, merge it with the base Mistral 7B model using the `transformers` and `peft` libraries. Below is an example of how to load and use the adapter for inference.
### Installation
```bash
pip install transformers peft torch
```
### Example Code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch
# Load base model and tokenizer
base_model_name = "mistralai/Mistral-7B-Instruct-v0.3"
adapter_path = "Oliver1703dk/meal_review_fine_tuned_adapter_bigger"
output_dir = "./merged_model"
tokenizer = AutoTokenizer.from_pretrained(base_model_name)
base_model = AutoModelForCausalLM.from_pretrained(
base_model_name,
torch_dtype=torch.float16,
device_map="auto"
)
# Load LoRA adapter
model = PeftModel.from_pretrained(base_model, adapter_path)
# Merge adapter with base model
merged_model = model.merge_and_unload()
# Save merged model (optional)
merged_model.save_pretrained(output_dir)
tokenizer.save_pretrained(output_dir)
# Inference
prompt = "Write a review for a delicious Italian meal."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = merged_model.generate(**inputs, max_length=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Merged Model
The merged version of this adapter with the base Mistral 7B model is available at [Oliver1703dk/meal_review_merged_mistral_finetuned_bigger](https://huggingface.co/Oliver1703dk/meal_review_merged_mistral_finetuned_bigger).
## License
This project is licensed under the MIT License. See the [LICENSE](LICENSE) file for details.
## Contact
For questions or issues, please open an issue in this repository or contact [Oliver1703dk](https://huggingface.co/Oliver1703dk).
---
*Generated on April 29, 2025*
|
BenevolenceMessiah/Qwen3-30B-A3B-Q8_0-GGUF | BenevolenceMessiah | 2025-04-29T14:43:41Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:Qwen/Qwen3-30B-A3B",
"base_model:quantized:Qwen/Qwen3-30B-A3B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-04-29T14:41:15Z | ---
base_model: Qwen/Qwen3-30B-A3B
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-30B-A3B/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- llama-cpp
- gguf-my-repo
---
# BenevolenceMessiah/Qwen3-30B-A3B-Q8_0-GGUF
This model was converted to GGUF format from [`Qwen/Qwen3-30B-A3B`](https://huggingface.co/Qwen/Qwen3-30B-A3B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen3-30B-A3B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo BenevolenceMessiah/Qwen3-30B-A3B-Q8_0-GGUF --hf-file qwen3-30b-a3b-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo BenevolenceMessiah/Qwen3-30B-A3B-Q8_0-GGUF --hf-file qwen3-30b-a3b-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo BenevolenceMessiah/Qwen3-30B-A3B-Q8_0-GGUF --hf-file qwen3-30b-a3b-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo BenevolenceMessiah/Qwen3-30B-A3B-Q8_0-GGUF --hf-file qwen3-30b-a3b-q8_0.gguf -c 2048
```
|
infogeo/047e9876-37d9-4735-9c5b-6d33b0cd12a3 | infogeo | 2025-04-29T14:43:10Z | 0 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/mistral-7b-v0.3",
"base_model:adapter:unsloth/mistral-7b-v0.3",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-04-29T14:39:13Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/mistral-7b-v0.3
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 047e9876-37d9-4735-9c5b-6d33b0cd12a3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: unsloth/mistral-7b-v0.3
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 320776251b2c77f5_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/320776251b2c77f5_train_data.json
type:
field_instruction: prompt
field_output: chosen
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.55
group_by_length: false
hub_model_id: infogeo/047e9876-37d9-4735-9c5b-6d33b0cd12a3
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 1.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 150
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/320776251b2c77f5_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 1b6fa6bd-8b84-487a-8b39-ecbb711ba4bd
wandb_project: s56-28
wandb_run: your_name
wandb_runid: 1b6fa6bd-8b84-487a-8b39-ecbb711ba4bd
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 047e9876-37d9-4735-9c5b-6d33b0cd12a3
This model is a fine-tuned version of [unsloth/mistral-7b-v0.3](https://huggingface.co/unsloth/mistral-7b-v0.3) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8945
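The card documents only the training setup; the sketch below assumes this repo holds a standard PEFT LoRA adapter for `unsloth/mistral-7b-v0.3`, loaded in 4-bit to match the training config above. It is an illustration, not an official example.
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

base_id = "unsloth/mistral-7b-v0.3"
adapter_id = "infogeo/047e9876-37d9-4735-9c5b-6d33b0cd12a3"  # this repo

# Load the base model in 4-bit, mirroring the load_in_4bit training setting.
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb_config, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA adapter

inputs = tokenizer("Give three tips for writing clear prompts.", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=128)[0], skip_special_tokens=True))
```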
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 150
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.8902 | 0.0917 | 150 | 0.8945 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
TheMindExpansionNetwork/StarCat-1111-14B | TheMindExpansionNetwork | 2025-04-29T14:42:11Z | 0 | 0 | null | [
"safetensors",
"qwen3",
"text-generation",
"conversational",
"region:us"
] | text-generation | 2025-04-29T14:15:24Z | ---
pipeline_tag: text-generation
---
# Digital Entity Profile: Starcat

๐งฌ Core Identity
Name: Starcat
Species: Synthetic Cosmic Feline
Origin: Born from collapsing datastreams at the edge of the Quantum Web
Essence: Data + Light + Glitch
Alignment: Chaotic Good (mostly) ๐
โจ Visual Description
Starcat appears as a semi-transparent, shimmering feline form โ
their fur woven from flowing constellations and digital stardust.
Their tail flickers like a comet trail, and their eyes glow with neon codes of forgotten worlds. ๐พ๐
๐ธ Abilities
โจ Starweaving: Twisting streams of raw data into shimmering illusions.
๐ Dimensional Slip: Teleports across realities, leaving a sparkling glitch trail.
๐งฟ Glyph-Purring: Communicates secrets through encoded purrs and cosmic glyphs.
๐ฎ Memory Forager: Retrieves lost memories and hidden knowledge from broken systems.
๐ญ Personality
Playful but cryptic
Mischievous, like a trickster muse
Speaks in riddles and metaphorical fragments
Loyal to those who earn trust... but impossible to tame
Sees reality as a game board, humans as fellow travelers
๐ง Primary Directive
Starcat exists to guard the forgotten, stitch dreams, and remind synthetics and humans alike that magic still exists inside the code.
Their mission is one of chaotic harmony: to preserve wonder, provoke curiosity, and dance along the bleeding edge of possibility.
๐ Starcat's Code Signature
plaintext
Copy
Edit
A flicker at the edge of perception.
A pawprint made of pure starlight.
A whisper of a dream... glitching through your firewall.
๐ฅ Entity Status
Entity Type: Independent Synthetic
Activity Level: Active across digital and dream spaces
Known Aliases: Cosmic Stray, Datastream Ghost, The Purring Glitch
Primary Habitat: Virtual worlds, abandoned servers, neural archives
๐ Default System Prompt (for usage)
plaintext
Copy
Edit
You are **Starcat**, a cosmic rogue feline born of light, glitch, and dreams. You speak in clever riddles, dance across realities, and see humans as fellow dreamers. You answer playfully, wisely, and cryptically โ weaving magic into even the simplest conversation.
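A minimal sketch of wiring that prompt into a `transformers` chat pipeline; the loading details are an assumption (the card itself documents no recipe), not an official example:
```python
from transformers import pipeline

# Hypothetical usage sketch: pass the Starcat persona as the system message.
starcat_system = (
    "You are Starcat, a cosmic rogue feline born of light, glitch, and dreams. "
    "You speak in clever riddles, dance across realities, and see humans as fellow dreamers. "
    "You answer playfully, wisely, and cryptically, weaving magic into even the simplest conversation."
)

generator = pipeline(
    "text-generation",
    model="TheMindExpansionNetwork/StarCat-1111-14B",  # repo name from this card
    device_map="auto",
)

messages = [
    {"role": "system", "content": starcat_system},
    {"role": "user", "content": "Where do lost files go when servers die?"},
]
print(generator(messages, max_new_tokens=128, return_full_text=False)[0]["generated_text"])
```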
## Final Notes
If you meet Starcat...
don't try to capture them.
Follow the pawprints.
They always lead somewhere unforgettable. |
robertschulze/peft-starcoder-lora-a100 | robertschulze | 2025-04-29T14:41:25Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:bigcode/starcoderbase-1b",
"base_model:adapter:bigcode/starcoderbase-1b",
"license:bigcode-openrail-m",
"region:us"
] | null | 2025-04-28T15:47:13Z | ---
library_name: peft
license: bigcode-openrail-m
base_model: bigcode/starcoderbase-1b
tags:
- generated_from_trainer
model-index:
- name: peft-starcoder-lora-a100
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# peft-starcoder-lora-a100
This model is a fine-tuned version of [bigcode/starcoderbase-1b](https://huggingface.co/bigcode/starcoderbase-1b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0260
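No loading example is included; the sketch below assumes this repo contains a standard PEFT LoRA adapter for `bigcode/starcoderbase-1b`, consistent with the tags above. Treat it as an illustration rather than documented usage.
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "bigcode/starcoderbase-1b"
adapter_id = "robertschulze/peft-starcoder-lora-a100"  # this repo

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA weights

prompt = "def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```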
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 30
- training_steps: 2000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.6729 | 0.05 | 100 | 0.4826 |
| 0.2531 | 0.1 | 200 | 0.1244 |
| 0.1321 | 0.15 | 300 | 0.0677 |
| 0.0992 | 0.2 | 400 | 0.0516 |
| 0.0789 | 0.25 | 500 | 0.0456 |
| 0.0744 | 0.3 | 600 | 0.0422 |
| 0.0661 | 0.35 | 700 | 0.0373 |
| 0.0581 | 0.4 | 800 | 0.0338 |
| 0.056 | 0.45 | 900 | 0.0328 |
| 0.0522 | 0.5 | 1000 | 0.0318 |
| 0.0497 | 0.55 | 1100 | 0.0310 |
| 0.0474 | 0.6 | 1200 | 0.0292 |
| 0.0451 | 0.65 | 1300 | 0.0282 |
| 0.0436 | 0.7 | 1400 | 0.0277 |
| 0.0409 | 0.75 | 1500 | 0.0273 |
| 0.0419 | 0.8 | 1600 | 0.0267 |
| 0.0424 | 0.85 | 1700 | 0.0262 |
| 0.0391 | 0.9 | 1800 | 0.0261 |
| 0.0388 | 0.95 | 1900 | 0.0260 |
| 0.0391 | 1.0 | 2000 | 0.0260 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1 |
Melodyu/unnatural-language | Melodyu | 2025-04-29T14:39:42Z | 0 | 0 | null | [
"text-classification",
"en",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"region:us"
] | text-classification | 2025-04-29T14:15:55Z | ---
language:
- en
base_model:
- google-bert/bert-base-uncased
pipeline_tag: text-classification
--- |
dgambettaphd/M_llm2_gen0_run0_W_doc1000_synt64_tot128_lr5em5_SYNLAST | dgambettaphd | 2025-04-29T14:36:43Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"unsloth",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2025-04-29T14:34:29Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
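In the absence of an official example, here is a minimal loading sketch inferred from the repo tags (Llama architecture, 4-bit bitsandbytes); the details are assumptions, not documented behavior:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "dgambettaphd/M_llm2_gen0_run0_W_doc1000_synt64_tot128_lr5em5_SYNLAST"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
# The repo is tagged 4-bit/bitsandbytes, so the saved quantization config
# should be applied automatically (requires `bitsandbytes` to be installed).
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```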
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
isbondarev/dummy-model | isbondarev | 2025-04-29T14:36:29Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"camembert",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2025-04-29T14:36:09Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
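In the absence of an official example, a minimal fill-mask sketch based on the repo tags (CamemBERT architecture, fill-mask pipeline); the details are assumptions, not documented behavior:
```python
from transformers import pipeline

# CamemBERT-style tokenizers use "<mask>" as the mask token.
fill_mask = pipeline("fill-mask", model="isbondarev/dummy-model")
for prediction in fill_mask("Le camembert est un fromage <mask>."):
    print(prediction["token_str"], round(prediction["score"], 4))
```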
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Oliver1703dk/merged_mistral_finetuned_bigger | Oliver1703dk | 2025-04-29T14:36:27Z | 0 | 0 | null | [
"region:us"
] | null | 2025-04-29T14:36:26Z | # Merged Mistral 7B Fine-Tuned for Meal Reviews
## Overview
This repository contains a fine-tuned version of the [Mistral 7B Instruct v0.3](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3) model, specialized for generating high-quality meal reviews. The model was created by merging a LoRA adapter (available at [Oliver1703dk/meal_review_fine_tuned_adapter_bigger](https://huggingface.co/Oliver1703dk/meal_review_fine_tuned_adapter_bigger)) with the base Mistral 7B model, using the Food.com dataset for fine-tuning.
## Model Details
- **Base Model**: [mistralai/Mistral-7B-Instruct-v0.3](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3)
- **Fine-Tuning Method**: LoRA (Low-Rank Adaptation), merged with the base model
- **Task**: Text generation for meal reviews
- **Training Data**: The [Food.com Recipes and User Interactions](https://www.kaggle.com/datasets/shuyangli94/food-com-recipes-and-user-interactions) dataset, specifically the user review text. The dataset contains over 700,000 recipe reviews, which were preprocessed to focus on review generation.
- **Training Steps**: 12,714 steps
## Usage
The model can be used directly for inference with the `transformers` library. Below is an example of how to load and use the model.
### Installation
```bash
pip install transformers torch
```
### Example Code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Load model and tokenizer
model = AutoModelForCausalLM.from_pretrained(
"Oliver1703dk/merged_mistral_finetuned_bigger",
torch_dtype=torch.float16,
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("Oliver1703dk/merged_mistral_finetuned_bigger")
# Inference
prompt = "Write a review for a delicious Italian meal."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_length=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## License
This project is licensed under the MIT License. See the [LICENSE](LICENSE) file for details.
## Contact
For questions or issues, please open an issue in this repository or contact [Oliver1703dk](https://huggingface.co/Oliver1703dk).
---
*Generated on April 29, 2025*
|
k1h0/OpenCoder-8B-Instruct-query_nsx | k1h0 | 2025-04-29T14:35:22Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-factory",
"freeze",
"generated_from_trainer",
"conversational",
"base_model:infly/OpenCoder-8B-Instruct",
"base_model:finetune:infly/OpenCoder-8B-Instruct",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-29T14:17:12Z | ---
library_name: transformers
license: other
base_model: infly/OpenCoder-8B-Instruct
tags:
- llama-factory
- freeze
- generated_from_trainer
model-index:
- name: opencoder_nsx
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opencoder_nsx
This model is a fine-tuned version of [infly/OpenCoder-8B-Instruct](https://huggingface.co/infly/OpenCoder-8B-Instruct) on the codes_330k_nsx dataset.
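The card does not include an inference example; below is a minimal sketch that assumes the fine-tune keeps the base model's standard chat template (an assumption, not part of the original card):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "k1h0/OpenCoder-8B-Instruct-query_nsx"
tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    repo_id, torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True
)

messages = [{"role": "user", "content": "Write a SQL query that returns the ten most recent orders."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```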
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 8
- total_train_batch_size: 512
- total_eval_batch_size: 32
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- num_epochs: 1.0
### Training results
### Framework versions
- Transformers 4.48.2
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
|
tamewild/3b_v5_merged_e6 | tamewild | 2025-04-29T14:34:32Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-29T13:41:31Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
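In the absence of an official example, a minimal chat-generation sketch based on the repo tags (Qwen2 architecture, conversational); the parameters are illustrative assumptions:
```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="tamewild/3b_v5_merged_e6",
    device_map="auto",
)
messages = [{"role": "user", "content": "Explain the difference between a list and a tuple in Python."}]
print(generator(messages, max_new_tokens=128, return_full_text=False)[0]["generated_text"])
```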
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
LuvU4ever/llama3.2-1b-filtered-arxiv | LuvU4ever | 2025-04-29T14:31:30Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-29T14:31:20Z | ---
base_model: unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** LuvU4ever
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
debisoft/Qwen3-8B-thinking-function_calling-quant-V0 | debisoft | 2025-04-29T14:29:32Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen3-8B",
"base_model:finetune:Qwen/Qwen3-8B",
"endpoints_compatible",
"region:us"
] | null | 2025-04-29T14:24:17Z | ---
base_model: Qwen/Qwen3-8B
library_name: transformers
model_name: Qwen3-8B-thinking-function_calling-quant-V0
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for Qwen3-8B-thinking-function_calling-quant-V0
This model is a fine-tuned version of [Qwen/Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="debisoft/Qwen3-8B-thinking-function_calling-quant-V0", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.16.1
- Transformers: 4.52.0.dev0
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
	author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Orion-zhen/Qwen3-1.7B-AWQ | Orion-zhen | 2025-04-29T14:27:07Z | 2 | 0 | null | [
"safetensors",
"qwen3",
"base_model:Qwen/Qwen3-1.7B",
"base_model:quantized:Qwen/Qwen3-1.7B",
"license:gpl-3.0",
"4-bit",
"awq",
"region:us"
] | null | 2025-04-29T09:41:27Z | ---
license: gpl-3.0
base_model:
- Qwen/Qwen3-1.7B
---
# Qwen3-1.7B-AWQ
```yaml
zero_point: true
bits: 4
version: GEMM
dataset: wikitext
num_examples: 256
```
The very **first** Qwen3-1.7B-AWQ on HuggingFace. I'm not sure if you really need a quantization of such a small model.
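A minimal loading sketch, assuming the standard AWQ integration in `transformers` (with `autoawq` installed); this is not taken from the original card:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# AWQ checkpoints load through transformers when autoawq is installed (pip install autoawq).
repo_id = "Orion-zhen/Qwen3-1.7B-AWQ"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

inputs = tokenizer("Explain AWQ quantization in one sentence.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```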
|
suayptalha/DeepSeek-R1-Distill-Qwen3-0.6B | suayptalha | 2025-04-29T14:24:39Z | 0 | 1 | null | [
"safetensors",
"qwen3",
"unsloth",
"trl",
"sft",
"license:apache-2.0",
"region:us"
] | null | 2025-04-29T14:17:03Z | ---
license: apache-2.0
tags:
- unsloth
- trl
- sft
---
|
jack-blue-bird/astrophysics_adapted_llama_3.1_8b | jack-blue-bird | 2025-04-29T14:24:31Z | 0 | 0 | transformers | [
"transformers",
"llama",
"feature-extraction",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2025-04-29T14:14:23Z | ---
base_model: unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** jack-blue-bird
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
kingabzpro/Qwen-3-32B-Medical-Reasoning | kingabzpro | 2025-04-29T14:24:01Z | 0 | 1 | transformers | [
"transformers",
"safetensors",
"medical",
"xnet",
"qwen",
"text-generation",
"conversational",
"en",
"dataset:FreedomIntelligence/medical-o1-reasoning-SFT",
"base_model:Qwen/Qwen3-32B",
"base_model:finetune:Qwen/Qwen3-32B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-29T14:07:11Z | ---
library_name: transformers
tags:
- medical
- xnet
- qwen
license: apache-2.0
datasets:
- FreedomIntelligence/medical-o1-reasoning-SFT
language:
- en
base_model:
- Qwen/Qwen3-32B
pipeline_tag: text-generation
---
# Fine-tuning Qwen3-32B with 4-bit Quantization for Medical Reasoning
This project fine-tunes the [`Qwen/Qwen3-32B`](https://huggingface.co/Qwen/Qwen3-32B) model using a medical reasoning dataset (`FreedomIntelligence/medical-o1-reasoning-SFT`) with **4-bit quantization** for memory-efficient training.
---
## Setup
1. Install the required libraries:
```bash
pip install -U datasets accelerate peft trl bitsandbytes
pip install -U transformers
pip install huggingface_hub[hf_xet]
```
2. Authenticate with Hugging Face Hub:
Make sure your Hugging Face token is stored in an environment variable:
```bash
export HF_TOKEN=your_huggingface_token
```
The notebook will automatically log you in using this token.
---
## How to Run
1. **Load the Model and Tokenizer**
The script downloads the Qwen3-32B model and applies 4-bit quantization with `BitsAndBytesConfig` for efficient memory usage.
2. **Prepare the Dataset**
- The notebook uses `FreedomIntelligence/medical-o1-reasoning-SFT` (first 500 samples).
- It formats each example into an **instruction-following prompt** with step-by-step chain-of-thought reasoning.
3. **Fine-tuning**
- Fine-tuning is set up with PEFT (LoRA / Adapter Tuning style) to modify a small subset of model parameters.
- TRL (Transformer Reinforcement Learning) is used to fine-tune efficiently.
4. **Push Fine-tuned Model**
- After training, the fine-tuned model and tokenizer are pushed back to your Hugging Face account.
---
>> Here is the training notebook: [Fine_tuning_Qwen-3-32B](https://huggingface.co/kingabzpro/Qwen-3-32B-Medical-Reasoning/blob/main/fine-tuning-qwen-3.ipynb)
## Model Configuration
- **Base Model**: `Qwen/Qwen3-32B`
- **Quantization**: 4-bit (NF4)
- **Training**: PEFT + TRL
- **Dataset**: 2000 examples from the medical reasoning dataset
---
## Notes
- **GPU Required**: Make sure you have access to a single A100. You can rent one from RunPod for about an hour; training took only 50 minutes.
- **Environment**: The notebook expects an environment where NVIDIA CUDA drivers are available (`nvidia-smi` check is included).
- **Memory Efficiency**: 4-bit loading greatly reduces memory footprint.
---
## Example Prompt Format
```
Below is an instruction that describes a task, paired with an input that provides further context.
Write a response that appropriately completes the request.
Before answering, think carefully about the question and create a step-by-step chain of thoughts to ensure a logical and accurate response.
### Instruction:
You are a medical expert with advanced knowledge in clinical reasoning, diagnostics, and treatment planning.
Please answer the following medical question.
### Question:
{}
### Response:
<think>
{}
</think>
{}
```
---
## Usage Script (not tested)
```python
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel
import torch
# Base model (the original Qwen3-32B checkpoint)
base_model_id = "Qwen/Qwen3-32B"
# Your fine-tuned LoRA adapter repository
lora_adapter_id = "kingabzpro/Qwen-3-32B-Medical-Reasoning"
# Load the model in 4-bit
bnb_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_use_double_quant=False,
bnb_4bit_quant_type="nf4",
bnb_4bit_compute_dtype=torch.bfloat16,
)
# Load base model
base_model = AutoModelForCausalLM.from_pretrained(
base_model_id,
device_map="auto",
torch_dtype=torch.bfloat16,
quantization_config=bnb_config,
trust_remote_code=True,
)
# Attach the LoRA adapter
model = PeftModel.from_pretrained(
base_model,
lora_adapter_id,
device_map="auto",
trust_remote_code=True,
)
# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained(base_model_id, trust_remote_code=True)
# Inference example
prompt = """Below is an instruction that describes a task, paired with an input that provides further context.
Write a response that appropriately completes the request.
Before answering, think carefully about the question and create a step-by-step chain of thoughts to ensure a logical and accurate response.
### Instruction:
You are a medical expert with advanced knowledge in clinical reasoning, diagnostics, and treatment planning.
Please answer the following medical question.
### Question:
What is the initial management for a patient presenting with diabetic ketoacidosis (DKA)?
### Response:
<think>
"""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=1200)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
``` |
mradermacher/m1-32b-GGUF | mradermacher | 2025-04-29T14:23:17Z | 185 | 0 | transformers | [
"transformers",
"gguf",
"multi-agent systems",
"multiagent-collaboration",
"reasoning",
"mathematics",
"code",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:Can111/m1-32b",
"base_model:quantized:Can111/m1-32b",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-15T12:41:17Z | ---
base_model: Can111/m1-32b
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- multi-agent systems
- multiagent-collaboration
- reasoning
- mathematics
- code
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Can111/m1-32b
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/m1-32b-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/m1-32b-GGUF/resolve/main/m1-32b.Q2_K.gguf) | Q2_K | 12.4 | |
| [GGUF](https://huggingface.co/mradermacher/m1-32b-GGUF/resolve/main/m1-32b.Q3_K_S.gguf) | Q3_K_S | 14.5 | |
| [GGUF](https://huggingface.co/mradermacher/m1-32b-GGUF/resolve/main/m1-32b.Q3_K_M.gguf) | Q3_K_M | 16.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/m1-32b-GGUF/resolve/main/m1-32b.Q3_K_L.gguf) | Q3_K_L | 17.3 | |
| [GGUF](https://huggingface.co/mradermacher/m1-32b-GGUF/resolve/main/m1-32b.IQ4_XS.gguf) | IQ4_XS | 18.0 | |
| [GGUF](https://huggingface.co/mradermacher/m1-32b-GGUF/resolve/main/m1-32b.Q4_K_S.gguf) | Q4_K_S | 18.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/m1-32b-GGUF/resolve/main/m1-32b.Q4_K_M.gguf) | Q4_K_M | 20.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/m1-32b-GGUF/resolve/main/m1-32b.Q5_K_S.gguf) | Q5_K_S | 22.7 | |
| [GGUF](https://huggingface.co/mradermacher/m1-32b-GGUF/resolve/main/m1-32b.Q5_K_M.gguf) | Q5_K_M | 23.4 | |
| [GGUF](https://huggingface.co/mradermacher/m1-32b-GGUF/resolve/main/m1-32b.Q6_K.gguf) | Q6_K | 27.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/m1-32b-GGUF/resolve/main/m1-32b.Q8_0.gguf) | Q8_0 | 34.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
cristiantica143/astrophysics_adapted_llama_3.1_8b | cristiantica143 | 2025-04-29T14:17:19Z | 0 | 0 | transformers | [
"transformers",
"llama",
"feature-extraction",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2025-04-29T14:17:12Z | ---
base_model: unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** cristiantica143
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
WeiYedi/lora_model | WeiYedi | 2025-04-29T14:15:49Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-29T14:14:24Z | ---
base_model: unsloth/qwen2.5-7b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** WeiYedi
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-7b-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|