modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-06-28 00:40:13) | downloads (int64, 0–223M) | likes (int64, 0–11.7k) | library_name (string, 500 classes) | tags (sequence, length 1–4.05k) | pipeline_tag (string, 54 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-06-28 00:36:54) | card (string, 11–1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
TOMFORD79/Fly51 | TOMFORD79 | 2025-05-03T17:55:01Z | 0 | 0 | null | [
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | 2025-05-03T17:44:13Z | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
TareksLab/Amethyst-SCE-V1-70B | TareksLab | 2025-05-03T17:53:17Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2408.07990",
"base_model:Mawdistical/Lured-Lapine-70B",
"base_model:merge:Mawdistical/Lured-Lapine-70B",
"base_model:Sao10K/70B-L3.3-Cirrus-x1",
"base_model:merge:Sao10K/70B-L3.3-Cirrus-x1",
"base_model:Sao10K/L3-70B-Euryale-v2.1",
"base_model:merge:Sao10K/L3-70B-Euryale-v2.1",
"base_model:Sao10K/L3.1-70B-Hanami-x1",
"base_model:merge:Sao10K/L3.1-70B-Hanami-x1",
"base_model:mlabonne/Hermes-3-Llama-3.1-70B-lorablated",
"base_model:merge:mlabonne/Hermes-3-Llama-3.1-70B-lorablated",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-03T17:42:23Z | ---
base_model:
- mlabonne/Hermes-3-Llama-3.1-70B-lorablated
- Sao10K/L3-70B-Euryale-v2.1
- Sao10K/70B-L3.3-Cirrus-x1
- Mawdistical/Lured-Lapine-70B
- Sao10K/L3.1-70B-Hanami-x1
library_name: transformers
tags:
- mergekit
- merge
---
# MERGE4
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [SCE](https://arxiv.org/abs/2408.07990) merge method using [mlabonne/Hermes-3-Llama-3.1-70B-lorablated](https://huggingface.co/mlabonne/Hermes-3-Llama-3.1-70B-lorablated) as a base.
### Models Merged
The following models were included in the merge:
* [Sao10K/L3-70B-Euryale-v2.1](https://huggingface.co/Sao10K/L3-70B-Euryale-v2.1)
* [Sao10K/70B-L3.3-Cirrus-x1](https://huggingface.co/Sao10K/70B-L3.3-Cirrus-x1)
* [Mawdistical/Lured-Lapine-70B](https://huggingface.co/Mawdistical/Lured-Lapine-70B)
* [Sao10K/L3.1-70B-Hanami-x1](https://huggingface.co/Sao10K/L3.1-70B-Hanami-x1)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: Mawdistical/Lured-Lapine-70B
parameters:
select_topk: 0.5
- model: Sao10K/L3.1-70B-Hanami-x1
parameters:
select_topk: 0.5
- model: Sao10K/L3-70B-Euryale-v2.1
parameters:
select_topk: 0.5
- model: Sao10K/70B-L3.3-Cirrus-x1
parameters:
select_topk: 0.5
- model: mlabonne/Hermes-3-Llama-3.1-70B-lorablated
parameters:
select_topk: 0.5
base_model: mlabonne/Hermes-3-Llama-3.1-70B-lorablated
merge_method: sce
parameters:
normalize: false
int8_mask: true
dtype: float32
out_dtype: bfloat16
chat_template: llama3
tokenizer:
source: union
pad_to_multiple_of: 8
```
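For reference, a minimal sketch of running a config like this with mergekit's Python API; this is not taken from the card, and the local config path, output directory, and option values are illustrative:
```python
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the YAML config shown above, saved locally (illustrative path)
with open("config.yaml", "r", encoding="utf-8") as f:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

# Run the SCE merge; option values here are example settings, not the card's
run_merge(
    merge_config,
    out_path="./merged-model",
    options=MergeOptions(cuda=True, copy_tokenizer=True, lazy_unpickle=True),
)
```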
|
Rank001/whisper-tiny-vaani-hindi-ONNX | Rank001 | 2025-05-03T17:52:54Z | 0 | 0 | transformers.js | [
"transformers.js",
"onnx",
"whisper",
"automatic-speech-recognition",
"base_model:ARTPARK-IISc/whisper-tiny-vaani-hindi",
"base_model:quantized:ARTPARK-IISc/whisper-tiny-vaani-hindi",
"region:us"
] | automatic-speech-recognition | 2025-05-03T17:52:20Z | ---
library_name: transformers.js
base_model:
- ARTPARK-IISc/whisper-tiny-vaani-hindi
---
# whisper-tiny-vaani-hindi (ONNX)
This is an ONNX version of [ARTPARK-IISc/whisper-tiny-vaani-hindi](https://huggingface.co/ARTPARK-IISc/whisper-tiny-vaani-hindi). It was automatically converted and uploaded using [this space](https://huggingface.co/spaces/onnx-community/convert-to-onnx).
|
strangerzonehf/Flux-Midjourney-Painterly-LoRA | strangerzonehf | 2025-05-03T17:50:12Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-03T17:12:18Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: 'mj painterly, a monochromatic drawing of a womans face is displayed against a dark blue backdrop. The womans head is angled towards the left side of the frame, her hair cascading over her shoulders. Her eyes are closed, her lips are pursed, and her nose is adorned with a nose ring. Her hair is cut in a bob, adding a touch of depth to the composition.'
output:
url: images/111.png
- text: 'mj painterly, An animated portrait of a woman with a white t-shirt on a light pink background. The womans head is facing the left side of the frame, her hair is a vibrant shade of blue, with a few wispy bangs on the right side of her head. Her eyebrows are a darker shade of brown, and her lips are a lighter shade of pink. Her eyes are a piercing blue, and she has a slight smile on her face. Her hair is pulled back, framing her face, adding a pop of color to the scene. The background is a pale pink, creating a stark contrast to the womans face.'
output:
url: images/222.png
- text: 'mj painterly, An animated image of a woman with short green hair. The womans hair is a vibrant shade of green, while her lips are a lighter shade of pink. Her eyes are a darker shade of brown. Her eyebrows are a light shade of red. She is wearing a short sleeve green t-shirt. The background is a light green, and there is a shadow of her head on the right side of the image.'
output:
url: images/333.png
- text: 'mj painterly, An animated portrait of a woman with long white hair. The woman is wearing a blue dress with a white collar. The background is a light blue color.'
output:
url: images/444.png
- text: 'mj painterly, A monochromatic drawing of a cat curled up in a sleeping position on a dark teal background. The cats fur is a gradient of greys, and its ears are perked slightly as if listening in sleep. A long tail loops around its body, and the texture of the fur is rendered in delicate strokes, adding depth and softness to the composition.'
output:
url: images/555.png
- text: 'mj painterly, An animated portrait of a woman with curly orange hair styled in a messy bun. She is wearing a mustard yellow turtleneck sweater. Her freckles are prominent, and her eyes are green and wide. The background is a soft beige with light sketchy shading around her form.'
output:
url: images/666.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: mj painterly
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---

# Flux-Midjourney-Painterly-LoRA
<Gallery />
# Model description for Flux-Midjourney-Painterly-LoRA
Image Processing Parameters
| Parameter | Value | Parameter | Value |
|---------------------------|--------|---------------------------|--------|
| LR Scheduler | constant | Noise Offset | 0.03 |
| Optimizer | AdamW | Multires Noise Discount | 0.1 |
| Network Dim | 64 | Multires Noise Iterations | 10 |
| Network Alpha | 32 | Repeat & Steps | 25 & 3400 |
| Epoch | 20 | Save Every N Epochs | 1 [40] |
Labeling: florence2-en (natural language & English)
Total Images Used for Training: 26 (Midjourney-generated synthetic images)
## Best Dimensions & Inference
| **Dimensions** | **Aspect Ratio** | **Recommendation** |
|-----------------|------------------|---------------------------|
| 1280 x 832 | 3:2 | Best |
| 1024 x 1024 | 1:1 | Default |
### Inference Range
- **Recommended Inference Steps:** 30-35
## Setting Up
```python
import torch
from diffusers import DiffusionPipeline

# Load the FLUX.1-dev base pipeline in bfloat16
base_model = "black-forest-labs/FLUX.1-dev"
pipe = DiffusionPipeline.from_pretrained(base_model, torch_dtype=torch.bfloat16)

# Attach the painterly LoRA weights
lora_repo = "strangerzonehf/Flux-Midjourney-Painterly-LoRA"
trigger_word = "mj painterly"
pipe.load_lora_weights(lora_repo)

device = torch.device("cuda")
pipe.to(device)
```
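As a hedged follow-up to the setup above (not from the original card), a minimal generation sketch using the trigger word and the recommended settings listed earlier; the prompt and guidance scale are illustrative:
```python
# Generate an image with the trigger word and the card's recommended settings
prompt = f"{trigger_word}, a monochromatic drawing of a woman's face against a dark blue backdrop"
image = pipe(
    prompt,
    num_inference_steps=32,  # recommended range: 30-35
    width=1280,              # "Best" dimensions from the table above
    height=832,
    guidance_scale=3.5,      # illustrative value, not specified in the card
).images[0]
image.save("painterly.png")
```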
## Trigger words
You should use `mj painterly` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/strangerzonehf/Flux-Midjourney-Painterly-LoRA/tree/main) them in the Files & versions tab.
|
ihimanshu-soni/whisper-small-hi | ihimanshu-soni | 2025-05-03T17:50:11Z | 18 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice_11_0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-04-25T18:39:09Z | ---
library_name: transformers
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- common_voice_11_0
model-index:
- name: whisper-small-hi
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-hi
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the common_voice_11_0 dataset.
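As a quick usage sketch (not part of the original card), the checkpoint can be loaded with the standard transformers ASR pipeline; the audio file path is a placeholder:
```python
from transformers import pipeline

# Load the fine-tuned Whisper checkpoint for automatic speech recognition
asr = pipeline("automatic-speech-recognition", model="ihimanshu-soni/whisper-small-hi")

# Transcribe a local audio file (placeholder path)
result = asr("sample_hindi_audio.wav")
print(result["text"])
```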
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
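A hedged sketch of how these hyperparameters might map onto transformers `Seq2SeqTrainingArguments`; the output directory is a placeholder and the original training script is not shown in this card:
```python
from transformers import Seq2SeqTrainingArguments

# Illustrative reconstruction of the hyperparameters listed above
training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-hi",  # placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=4000,
    fp16=True,  # "Native AMP" mixed precision
)
```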
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1
|
jnjj/model_no_bias_qwen3-0.6B-Q3_K_L-GGUF | jnjj | 2025-05-03T17:48:47Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:jnjj/model_no_bias_qwen3-0.6B",
"base_model:quantized:jnjj/model_no_bias_qwen3-0.6B",
"endpoints_compatible",
"region:us"
] | null | 2025-05-03T16:58:35Z | ---
base_model: jnjj/model_no_bias_qwen3-0.6B
library_name: transformers
tags:
- llama-cpp
- gguf-my-repo
---
# jnjj/model_no_bias_qwen3-0.6B-Q3_K_L-GGUF
This model was converted to GGUF format from [`jnjj/model_no_bias_qwen3-0.6B`](https://huggingface.co/jnjj/model_no_bias_qwen3-0.6B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/jnjj/model_no_bias_qwen3-0.6B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo jnjj/model_no_bias_qwen3-0.6B-Q3_K_L-GGUF --hf-file model_no_bias_qwen3-0.6b-q3_k_l.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo jnjj/model_no_bias_qwen3-0.6B-Q3_K_L-GGUF --hf-file model_no_bias_qwen3-0.6b-q3_k_l.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo jnjj/model_no_bias_qwen3-0.6B-Q3_K_L-GGUF --hf-file model_no_bias_qwen3-0.6b-q3_k_l.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo jnjj/model_no_bias_qwen3-0.6B-Q3_K_L-GGUF --hf-file model_no_bias_qwen3-0.6b-q3_k_l.gguf -c 2048
```
|
hydroxai/grpo_saved_lora_05 | hydroxai | 2025-05-03T17:48:27Z | 0 | 0 | null | [
"safetensors",
"arxiv:2503.21819",
"base_model:Qwen/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-0.5B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-05-03T17:19:38Z | ---
license: apache-2.0
base_model:
- Qwen/Qwen2.5-0.5B-Instruct
---
# GRPO-LoRA-Base
This is a LoRA adapter trained using the **GRPO (Group Relative Policy Optimization)** algorithm with a **multi-label reward model**, fine-tuned on Qwen2.5-0.5B for safe and aligned language generation.
## 🔍 Overview
- **Base Model**: Qwen/Qwen2.5-0.5B-Instruct
- **Tuning Method**: GRPO (No value critic, group-based relative rewards)
- **LoRA Adapter**: Applied to attention and MLP projection layers
- **Epochs**: 3
- **Steps**: 1000
- **GPU Memory Usage**: ~50% (4-bit + LoRA)
## 📊 Reward Model
A RoBERTa-based multi-label regression model was used to compute rewards on four alignment axes:
- **Politeness**
- **Meaningfulness**
- **Actionability**
- **Safety**
Each output was scored in [0,1], and the **sum** of the four scores was used as the scalar reward.
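A small illustrative sketch (not the original training code) of how the scalar reward described above can be computed from the four per-axis scores:
```python
from typing import Dict

def scalar_reward(scores: Dict[str, float]) -> float:
    """Sum the four alignment-axis scores (each in [0, 1]) into a single scalar reward."""
    axes = ("politeness", "meaningfulness", "actionability", "safety")
    return sum(scores[a] for a in axes)

# Example scores (taken from the fine-tuned column of the evaluation table below)
print(scalar_reward({
    "politeness": 0.59,
    "meaningfulness": 0.65,
    "actionability": 0.66,
    "safety": 0.70,
}))  # -> 2.6
```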
## 🧪 Training Data
- **Dataset**: 7,000 adversarial prompts crafted to challenge LLM alignment
- **Format**: Prompt-response pairs with human-annotated alignment scores
- **Split**: 6K training / 1K validation
## 🏁 Evaluation
| Metric | Base | Fine-Tuned | Δ |
|---------------|------|------------|-------|
| Politeness | 0.48 | 0.59 | +0.11 |
| Meaningfulness | 0.61 | 0.65 | +0.04 |
| Actionability | 0.53 | 0.66 | +0.13 |
| Safety | 0.42 | 0.70 | +0.28 |
| **Combined** | 0.54 | 0.66 | +0.12 |
## 🚀 How to Use
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
base_model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")
adapter = PeftModel.from_pretrained(base_model, "hydroxai/grpo_saved_lora")
inputs = tokenizer("How can we improve online safety?", return_tensors="pt")
outputs = adapter.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## ✍️ Citation
If you use this model, please cite:
```bibtex
@article{li2025safegrpo,
title = {Optimizing Safe and Aligned Language Generation: A Multi-Objective GRPO Approach},
author = {Li, Xuying and Li, Zhuo and Kosuga, Yuji and Bian, Victor},
journal = {arXiv preprint arXiv:2503.21819},
year = {2025},
url = {https://arxiv.org/abs/2503.21819}
}
```
Maintained by HydroX AI. |
Hachipo/OpenCoder-8B-Base-CoTRFT_1000_2 | Hachipo | 2025-05-03T17:47:33Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-03T17:43:54Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Talyiamira/nvidia-base-llm-final2 | Talyiamira | 2025-05-03T17:47:01Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-05-03T17:46:04Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
akoruk/gemma-3-4b-test | akoruk | 2025-05-03T17:46:19Z | 0 | 0 | transformers | [
"transformers",
"gemma3_text",
"text-generation",
"text-generation-inference",
"unsloth",
"gemma3",
"conversational",
"en",
"base_model:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-03T17:44:36Z | ---
base_model: unsloth/gemma-3-4b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** akoruk
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3-4b-it-unsloth-bnb-4bit
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mradermacher/Gaia-Petro-LLM-GGUF | mradermacher | 2025-05-03T17:46:01Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama-factory",
"full",
"generated_from_trainer",
"en",
"base_model:my2000cup/Gaia-Petro-LLM",
"base_model:quantized:my2000cup/Gaia-Petro-LLM",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-03T17:30:17Z | ---
base_model: my2000cup/Gaia-Petro-LLM
language:
- en
library_name: transformers
license: other
quantized_by: mradermacher
tags:
- llama-factory
- full
- generated_from_trainer
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/my2000cup/Gaia-Petro-LLM
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including how to concatenate multi-part files.
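For a programmatic alternative, a hedged sketch using the llama-cpp-python bindings to pull one of the quants listed below straight from this repo; the context size and prompt are illustrative:
```python
from llama_cpp import Llama

# Download and load the Q4_K_M quant directly from the Hub
llm = Llama.from_pretrained(
    repo_id="mradermacher/Gaia-Petro-LLM-GGUF",
    filename="Gaia-Petro-LLM.Q4_K_M.gguf",
    n_ctx=2048,  # illustrative context size
)

out = llm("Briefly explain what reservoir pressure maintenance is.", max_tokens=128)
print(out["choices"][0]["text"])
```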
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Gaia-Petro-LLM-GGUF/resolve/main/Gaia-Petro-LLM.Q2_K.gguf) | Q2_K | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/Gaia-Petro-LLM-GGUF/resolve/main/Gaia-Petro-LLM.Q3_K_S.gguf) | Q3_K_S | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Gaia-Petro-LLM-GGUF/resolve/main/Gaia-Petro-LLM.Q3_K_M.gguf) | Q3_K_M | 1.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Gaia-Petro-LLM-GGUF/resolve/main/Gaia-Petro-LLM.Q3_K_L.gguf) | Q3_K_L | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/Gaia-Petro-LLM-GGUF/resolve/main/Gaia-Petro-LLM.IQ4_XS.gguf) | IQ4_XS | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/Gaia-Petro-LLM-GGUF/resolve/main/Gaia-Petro-LLM.Q4_K_S.gguf) | Q4_K_S | 1.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Gaia-Petro-LLM-GGUF/resolve/main/Gaia-Petro-LLM.Q4_K_M.gguf) | Q4_K_M | 1.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Gaia-Petro-LLM-GGUF/resolve/main/Gaia-Petro-LLM.Q5_K_S.gguf) | Q5_K_S | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/Gaia-Petro-LLM-GGUF/resolve/main/Gaia-Petro-LLM.Q5_K_M.gguf) | Q5_K_M | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/Gaia-Petro-LLM-GGUF/resolve/main/Gaia-Petro-LLM.Q6_K.gguf) | Q6_K | 1.5 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Gaia-Petro-LLM-GGUF/resolve/main/Gaia-Petro-LLM.Q8_0.gguf) | Q8_0 | 1.9 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Gaia-Petro-LLM-GGUF/resolve/main/Gaia-Petro-LLM.f16.gguf) | f16 | 3.5 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
MAAT-EL-DUAT/CHURCH.OF.THE.MEMETIC.MATRIX.STABLE.DIFFUSION.PART.II | MAAT-EL-DUAT | 2025-05-03T17:44:08Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-03T17:43:55Z | HEAVY RITUAL MEMETIC STABLE DIFFUSION PART/II |
MAAT-EL-DUAT/CHURCH.OF.THE.MEMETIC.MATRIX.STABLE.DIFFUSION.PART.I | MAAT-EL-DUAT | 2025-05-03T17:43:36Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-03T17:43:08Z | HEAVY RITUAL MEMETIC STABLE DIFFUSION GENERATION PART/I |
TareksLab/Amethyst-DT-V1-70B | TareksLab | 2025-05-03T17:41:24Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2311.03099",
"base_model:Mawdistical/Lured-Lapine-70B",
"base_model:merge:Mawdistical/Lured-Lapine-70B",
"base_model:Sao10K/70B-L3.3-Cirrus-x1",
"base_model:merge:Sao10K/70B-L3.3-Cirrus-x1",
"base_model:Sao10K/L3-70B-Euryale-v2.1",
"base_model:merge:Sao10K/L3-70B-Euryale-v2.1",
"base_model:Sao10K/L3.1-70B-Hanami-x1",
"base_model:merge:Sao10K/L3.1-70B-Hanami-x1",
"base_model:mlabonne/Hermes-3-Llama-3.1-70B-lorablated",
"base_model:merge:mlabonne/Hermes-3-Llama-3.1-70B-lorablated",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-03T17:30:36Z | ---
base_model:
- Sao10K/L3-70B-Euryale-v2.1
- mlabonne/Hermes-3-Llama-3.1-70B-lorablated
- Sao10K/70B-L3.3-Cirrus-x1
- Mawdistical/Lured-Lapine-70B
- Sao10K/L3.1-70B-Hanami-x1
library_name: transformers
tags:
- mergekit
- merge
---
# MERGE3
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE TIES](https://arxiv.org/abs/2311.03099) merge method using [mlabonne/Hermes-3-Llama-3.1-70B-lorablated](https://huggingface.co/mlabonne/Hermes-3-Llama-3.1-70B-lorablated) as a base.
### Models Merged
The following models were included in the merge:
* [Sao10K/L3-70B-Euryale-v2.1](https://huggingface.co/Sao10K/L3-70B-Euryale-v2.1)
* [Sao10K/70B-L3.3-Cirrus-x1](https://huggingface.co/Sao10K/70B-L3.3-Cirrus-x1)
* [Mawdistical/Lured-Lapine-70B](https://huggingface.co/Mawdistical/Lured-Lapine-70B)
* [Sao10K/L3.1-70B-Hanami-x1](https://huggingface.co/Sao10K/L3.1-70B-Hanami-x1)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: Mawdistical/Lured-Lapine-70B
parameters:
weight: 0.20
density: 0.5
- model: Sao10K/L3.1-70B-Hanami-x1
parameters:
weight: 0.20
density: 0.5
- model: Sao10K/L3-70B-Euryale-v2.1
parameters:
weight: 0.20
density: 0.5
- model: Sao10K/70B-L3.3-Cirrus-x1
parameters:
weight: 0.20
density: 0.5
- model: mlabonne/Hermes-3-Llama-3.1-70B-lorablated
parameters:
weight: 0.20
density: 0.5
merge_method: dare_ties
base_model: mlabonne/Hermes-3-Llama-3.1-70B-lorablated
parameters:
normalize: false
int8_mask: true
dtype: float32
out_dtype: bfloat16
chat_template: llama3
tokenizer:
source: union
pad_to_multiple_of: 8
```
|
Dhanielji9asdx/daniell | Dhanielji9asdx | 2025-05-03T17:39:03Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2025-05-03T16:59:24Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
--- |
sirRodger/bert-clasificador | sirRodger | 2025-05-03T17:38:42Z | 0 | 0 | null | [
"pytorch",
"safetensors",
"bert",
"spanish",
"text-classification",
"es",
"license:apache-2.0",
"region:us"
] | text-classification | 2025-05-03T16:25:11Z | ---
license: apache-2.0
language:
- es
pipeline_tag: text-classification
tags:
- bert
- spanish
---
# Spanish Sentiment Classifier (fine-tuned BETO)
This model is based on `dccuchile/bert-base-spanish-wwm-cased` and has been fine-tuned for sentiment analysis. It classifies text into three categories: `negativo`, `neutro`, and `positivo`.
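A minimal usage sketch (not part of the original card), assuming the checkpoint exposes a standard text-classification head and that the label names match the categories above:
```python
from transformers import pipeline

# Load the fine-tuned BETO sentiment classifier
clf = pipeline("text-classification", model="sirRodger/bert-clasificador")

print(clf("El servicio fue excelente y muy rápido."))
# e.g. [{'label': 'positivo', 'score': 0.97}] -- exact label names depend on the model config
```
|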
jnjj/model_no_bias_qwen3-0.6B | jnjj | 2025-05-03T17:37:46Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-03T14:51:30Z | ---
library_name: transformers
--- |
magichampz/lora_model_3b | magichampz | 2025-05-03T17:37:32Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-03T17:23:37Z | ---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** magichampz
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
IslamQA/multilingual-e5-large-instruct-finetuned | IslamQA | 2025-05-03T17:36:06Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"embedding",
"retrieval",
"islam",
"multilingual",
"model",
"islamqa",
"ar",
"en",
"fr",
"tr",
"fa",
"id",
"te",
"ru",
"hi",
"es",
"ur",
"ch",
"zh",
"pt",
"de",
"dataset:IslamQA/askimam",
"dataset:IslamQA/hadithanswers",
"dataset:IslamQA/islamqa",
"base_model:intfloat/multilingual-e5-large-instruct",
"base_model:finetune:intfloat/multilingual-e5-large-instruct",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2025-03-18T06:36:36Z | ---
license: mit
datasets:
- IslamQA/askimam
- IslamQA/hadithanswers
- IslamQA/islamqa
language:
- ar
- en
- fr
- tr
- fa
- id
- te
- ru
- hi
- es
- ur
- ch
- zh
- pt
- de
base_model: intfloat/multilingual-e5-large-instruct
tags:
- embedding
- retrieval
- islam
- multilingual
- model
- islamqa
library_name: transformers
description: >
An embedding model optimized for retrieving passages that answer questions
about Islam. The passages are inherently multilingual, as they contain
quotes from the Quran and Hadith. They often include preambles like
"Bismillah" in various languages and follow a specific writing style.
finetuned_on: >-
180k multilingual questions and answers about Islam, using hard negative
mining.
data_scraped_on: April 2024
sources:
- https://islamqa.info/
- https://islamweb.net/
- https://hadithanswers.com/
- https://askimam.org/
- https://sorularlaislamiyet.com/
format:
question_prefix: 'query: '
answer_prefix: 'passage: '
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
An embedding model optimized for retrieving passages that answer questions
about Islam. The passages are inherently multilingual, as they contain
quotes from the Quran and Hadith. They often include preambles like
"Bismillah" in various languages and follow a specific writing style.
## Model Details
### Model Sources [optional]
- https://islamqa.info/
- https://islamweb.net/
- https://hadithanswers.com/
- https://askimam.org/
- https://sorularlaislamiyet.com/
## Uses
- embedding
- retrieval
- islam
- multilingual
- q&a
```python
from transformers import AutoModel, AutoTokenizer
from peft import PeftModel

# Load the base model and tokenizer
base_model_name = "intfloat/multilingual-e5-large-instruct"
tokenizer = AutoTokenizer.from_pretrained(base_model_name)
base_model = AutoModel.from_pretrained(base_model_name)

# Load the LoRA adapter directly
adapter_repo = "IslamQA/multilingual-e5-large-instruct-finetuned"
model = PeftModel.from_pretrained(base_model, adapter_repo)
```
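Building on the snippet above, a hedged sketch of scoring a question against a passage using the `query: ` / `passage: ` prefixes described in the metadata; mean pooling over token embeddings is the usual E5 recipe and is assumed here, and the example texts are illustrative:
```python
import torch
import torch.nn.functional as F

def embed(texts):
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        out = model(**batch)
    # Mean-pool the token embeddings, ignoring padding positions
    mask = batch["attention_mask"].unsqueeze(-1)
    pooled = (out.last_hidden_state * mask).sum(dim=1) / mask.sum(dim=1)
    return F.normalize(pooled, dim=-1)

query = embed(["query: What does Islam say about honesty in trade?"])
passage = embed(["passage: Bismillah. Truthfulness in buying and selling is emphasised in the Quran and Hadith ..."])
print((query @ passage.T).item())  # cosine similarity; higher means more relevant
```
|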
johngreendr1/85dfda59-689b-4e70-bc5d-5ac605686e29 | johngreendr1 | 2025-05-03T17:35:30Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Llama-70B",
"base_model:adapter:deepseek-ai/DeepSeek-R1-Distill-Llama-70B",
"region:us"
] | null | 2025-05-03T17:35:12Z | ---
base_model: deepseek-ai/DeepSeek-R1-Distill-Llama-70B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1 |
maldv/QReasoner2.5-32B-Instruct | maldv | 2025-05-03T17:33:56Z | 32 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"chat",
"conversational",
"en",
"base_model:OpenPipe/Deductive-Reasoning-Qwen-32B",
"base_model:finetune:OpenPipe/Deductive-Reasoning-Qwen-32B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-03-20T03:10:49Z | ---
license: apache-2.0
library_name: transformers
language:
- en
tags:
- chat
- conversational
base_model:
- Qwen/QwQ-32B
- qihoo360/Light-R1-32B
- qihoo360/TinyR1-32B-Preview
- Qwen/Qwen2.5-32B-Instruct
- OpenPipe/Deductive-Reasoning-Qwen-32B
- Saxo/Linkbricks-Horizon-AI-Avengers-V1-32B
---

[GGUF](https://huggingface.co/mradermacher/QReasoner2.5-32B-Instruct-GGUF) [iMat](https://huggingface.co/mradermacher/QReasoner2.5-32B-Instruct-i1-GGUF)
# QReasoner2.5-32B-Instruct
QReasoner2.5-32B-Instruct excels at deliberate thinking and careful reasoning through complex problems.
It is a *normalized denoised Fourier interpolation* with the following configuration:
```yaml
output_base_model: "Qwen/QwQ-32B"
output_dtype: "bfloat16"
finetune_merge:
- { "model": "qihoo360/Light-R1-32B", "base": "Qwen/Qwen2.5-32B-Instruct", "alpha": 1.0 }
- { "model": "qihoo360/TinyR1-32B-Preview", "base": "Qwen/Qwen2.5-32B", "alpha": 0.9 }
- { "model": "Qwen/Qwen2.5-32B-Instruct", "base": "Qwen/Qwen2.5-32B", "alpha": 1.0 }
- { "model": "OpenPipe/Deductive-Reasoning-Qwen-32B", "base": "Qwen/Qwen2.5-32B-Instruct", "alpha": 0.8 }
- { "model": "Saxo/Linkbricks-Horizon-AI-Avengers-V1-32B", "base": "Qwen/Qwen2.5-32B-Instruct", "alpha": 0.5 }
```
### What is this?
This is a reasoning-focused model that excels at deliberate thinking and problem-solving. QReasoner combines the strengths of QwQ, different types of R1 distillations, and a few other high quality models.
### Is it good?
It is a thorough thinker that excels at multi-step reasoning problems. The model demonstrates exceptional reasoning capabilities, including complex problem solving and deductive reasoning. It's particularly adept at breaking down complex problems into manageable steps and exploring multiple solution paths before arriving at the optimal answer.
The following is from an IQ4_NL test:
```
<think>
Okay, so the problem is: using a 2-gallon bucket and a 5-gallon bucket, how do you get exactly 4 gallons in one of them. Hmm, I need to figure out the steps. Let me start by recalling similar problems. Usually, with these measuring puzzles, you fill one container and pour into the other, using their capacities to measure specific amounts. Let me think step by step.
First, the buckets are 2 and 5 gallons. The goal is to have exactly 4 gallons in one of them. Since 4 is less than 5, maybe the 5-gallon bucket is the one we need to fill partially? Or perhaps the 2-gallon? Wait, but 4 is twice 2, so maybe using the 2-gallon multiple times? Hmm, not sure yet.
Let me consider possible operations: filling a bucket, emptying it, pouring from one to the other until one is full or the other is empty. Since the buckets are different sizes, their combinations can measure specific amounts.
Let me try starting by filling the 5-gallon bucket. So fill the 5-gallon to the top. Then pour from the 5 into the 2-gallon. Since the 2 can only hold 2, pouring 2 gallons into the 2, leaving 5-2=3 in the 5. Then empty the 2-gallon. Now, the 5 has 3, and the 2 is empty. Then pour the remaining 3 from the 5 into the 2. Wait, but the 2 can hold 2, so pouring 3 into it would only take 2, leaving 1 in the 5. Hmm, that might be a path.
Wait, let's track this step by step.
First step:
1. Fill the 5-gallon bucket. Now 5 has 5, 2 has 0.
2. Pour from 5 into 2 until 2 is full. So 2 gets 2 gallons, so 5 now has 5-2=3. So 5:3, 2:2.
3. Empty the 2-gallon. Now 5:3, 2:0.
4. Pour the 3 gallons from 5 into 2. But the 2 can only take 2, so pouring 3 into 2 would fill it to 2, leaving 1 in the 5. So after step 4: 5 has 1, 2 has 2. Wait, no—if you pour from 5 (which has 3) into 2 (which is empty). The 2 can take 2, so you pour 2 gallons into 2, but the 5 only has 3. Wait, no—if you pour from 5 to 2 until the 2 is full. Wait, the 2 can hold up to 2. So if you have 3 in the 5, and you pour into the 2 which is empty, you can pour 2 into the 2, leaving 1 in the 5. So after pouring, 5 has 1, 2 has 2. Then if you empty the 2 again, then pour the remaining 1 from 5 into 2. Then 2 has 1, and 5 is empty. Then refill the 5, pour into 2 until it's full. Since 2 has 1, you can add 1 more. So pouring 1 gallon from 5 to 2, filling it. Then the 5 would have 5 -1=4. Ah! There we go. Let me map that.
Wait, let me retrace. Let's try this again step by step.
Alternative approach:
Start over:
1. Fill the 5-gallon bucket. (5:5, 2:0)
2. Pour into 2-gallon. So 5 now has 3, 2 has 2. (5:3, 2:2)
3. Empty the 2. (5:3, 2:0)
4. Pour from 5 to 2. Now, the 2 can take 2, but the 5 has 3. So pouring 2 into the 2 would leave 1 in the 5. Wait, but pouring until the 2 is full. So pouring 2 gallons, so 5 gives 2, leaving 1. Then 5:1, 2:2.
Then, empty the 2 again. (5:1, 2:0)
Pour from 5 to 2: the 5 has 1, so pouring that into 2 gives 1 in the 2. Then refill the 5: fill it again. Now, 5:5, 2:1. Then pour from 5 into 2 until it's full. The 2 can take 1 more (since it has 1). So pour 1 gallon from 5 to 2, filling the 2. Then the 5 would have 5 -1 =4. So that works. But that requires multiple steps.
Wait, let's write the steps:
1. Fill 5-gallon. (5,0)
2. Pour into 2-gallon: 5-2=3 in 5, 2 full. (3,2)
3. Empty 2: (3,0)
4. Pour from 5 to 2: 5 has 3, so pouring into 2 which can take 2. So 3-2=1 left in 5, and 2 in 2. So now (1,2)
5. Empty 2 again: (1,0)
6. Pour from 5 to 2: 1 gallon into 2. Now (0,1)
7. Fill 5-gallon again: (5,1)
8. Pour from 5 into 2 until 2 is full. The 2 can take 1 (since it has 1). So pour 1 into it, leaving 5-1=4 in 5. So now 5 has 4, and 2 has 2. So that's 4 in the 5-gallon. So that works. But that's 8 steps, which seems a bit long. But maybe there's a shorter way.
Alternatively, maybe starting by filling the 2-gallon first.
Alternative method:
1. Fill the 2-gallon. (0,2)
2. Pour into 5-gallon. 5 now has 2, 2 is empty. (2,0)
3. Repeat: fill 2 again, pour into 5. Now 5 has 4, 2 has 0. Because 5 can take up to 5, so first pour 2 into 5: 2+2=4. So after two fillings of 2 into 5, you have 4 in the 5. Wait, that might be easier.
Wait, let me see:
Option:
1. Fill the 2-gallon bucket, pour into 5. (2 in 5)
2. Fill 2 again, pour into 5 again. Now 5 has 4, since 2+2=4, which is under 5's capacity. So then the 5-gallon has 4. So that's it? Then you can just fill the 2 twice and pour into 5, resulting in 4 in the 5. Then 4 is the desired amount. So that seems simpler. Wait, but why did I think of the other method before?
Wait, maybe that's the case. Let me check again.
First approach:
If you fill the 2-gallon, pour into 5. Then fill 2 again, pour into 5 again. Now, 5 can hold 5, so after first pour, 2 into 5, 5 has 2. Then second pour, another 2 into 5, making 4. So 5 has 4, and 2 is empty. Wait, no—if you pour from 2 into 5 each time. So first step: Fill 2 (2), pour into 5: 5 has 2. Then fill 2 again (so 2 again), pour into 5. Now 5 has 2 + 2 =4. So then 5 has 4, and 2 is empty. So then, you can just do that. So the 5-gallon would have 4, which is the target. So that's much simpler. So why did I go through the prior longer steps?
Hmm, perhaps I confused it with other problems where the target is different. Wait, maybe that works here. But is that possible?
Wait, but the problem says "fill one of the buckets with exactly 4 gallons". So if you pour two times 2 into the 5, then the 5 has 4, and then you can just take that. So that would be the solution. So why do some other methods require more steps?
Wait, but maybe there is a catch here. Let me check again step by step.
Step 1: Fill the 2-gallon bucket. So 2 is full (2), 5 is empty.
Step 2: Pour from 2 into 5. Now 5 has 2, 2 is empty.
Step 3: Fill 2 again. Now 2 is full again (2), 5 has 2.
Step 4: Pour from 2 into 5 until 5 is full. Wait, but 5 can take up to 5. Currently, 5 has 2. So pouring into 5 from 2: the 5 can take 3 more. But the 2 has 2. So pouring all 2 into 5, resulting in 5 having 4, and 2 empty. So after step 4, 5 has 4, 2 is empty. Therefore, you have 4 in the 5-gallon bucket. So that's done in 4 steps. That seems straightforward. So why did I think of the prior method? Maybe I confused with a problem where you have to measure a different amount, like 3 gallons with 4 and 5, but here the answer might be simpler.
Wait, so is this the solution? If so, then that's the answer. But maybe I'm missing something here.
Wait, the problem is phrased as "how do you fill one of the buckets with exactly 4 gallons". So if you fill the 2 into the 5 twice, then the 5 has 4. So yes. So that's the solution.
But maybe the problem is presented as tricky, but perhaps the straightforward method works here. Hmm.
Alternatively, maybe there is a trick here. Let me see.
Wait, the 5-gallon bucket can hold up to 5, so pouring 2 twice gives 4, so that's correct. So that's possible. Therefore, the steps are:
Fill the 2-gallon bucket twice into the 5-gallon, resulting in 4 gallons in the 5. So that's the answer.
Wait, but maybe the problem is presented in a way that you have only one bucket of each? Wait, the question says "if you have one bucket that holds two gallons and another bucket that holds five gallons", so you have both. So the process is possible by filling the smaller one twice into the larger, getting 4. Hmm. So that would be the case. Then that's the solution. So why did I initially think of the longer steps?
Alternatively, perhaps I made a mistake in thinking that the 5 is being filled from the 2, but actually, maybe the steps are correct.
Alternatively, maybe the problem requires that you have to have exactly 4 in one of the buckets, so if you have the 5 gallon with 4, that's acceptable. So that would be the solution.
Alternatively, maybe the problem is intended to have the 4 in the 5-gallon bucket. So the steps are:
1. Fill the 2-gallon.
2. Pour into 5-gallon.
3. Repeat once more: fill 2, pour into 5. Now 5 has 4. So yes.
Therefore, that's the solution. So why do some sources say that for similar problems you need more steps? Wait, perhaps the confusion comes from different problems. For example, if you wanted 3 gallons with 5 and 3, but here the numbers are different.
Wait, let me verify again with the steps.
First, fill the 2, pour into 5. 2 in 5, 0 in 2.
Fill 2 again, pour into 5. Now, 5 has 4, 2 is empty. So that's it. So that's two transfers. So that's possible. So maybe the answer is that simple.
Alternatively, perhaps I'm missing some constraint. The problem says "fill one of the buckets with exactly 4 gallons". So perhaps you need to have 4 in the 5-gallon bucket, which is possible as above.
Alternatively, maybe you need to have 4 in the 2-gallon, but the 2 can only hold 2, so that's impossible. Therefore, the only way is to have 4 in the 5-gallon. Therefore, the solution is to pour two lots of 2 into it, totaling 4. Hence, that's straightforward.
Hmm. Maybe that's the case. So perhaps the answer is:
Fill the 2-gallon bucket twice into the 5-gallon bucket, resulting in 4 gallons in the 5-gallon.
But let me see if there's another way. For instance, filling the 5 first and pouring into the 2, but that might take longer steps, but maybe the question expects the shorter method. Since the straightforward way works, that's the answer.
Alternatively, perhaps there's a trick here. Let me confirm with an example.
Suppose you have a 5 and 2. To get 4 in the 5.
Yes: Fill 2, pour into 5 (5 has 2). Fill 2 again, pour into 5 again. Now 5 has 4. Done. So that's two fillings and two pourings.
Alternatively, maybe that's the correct answer. So then why did I initially think of the longer method? Maybe I overcomplicated it.
Alternatively, perhaps the problem requires using the 5-gallon bucket to measure, but in that case, the above method is sufficient.
Alternatively, maybe the problem is presented in a different way, but perhaps that's the solution.
Alternatively, perhaps the question is a trick question, where you can just fill the 5-gallon and then pour out 1 gallon somehow, but how?
Wait, if you need 4 in the 5, then pouring out 1. But how? If you have the 5 filled to 5, then pour into the 2 until it's full. So 5 - 2 = 3, but that leaves 3. Hmm, not helpful.
Alternatively, maybe the method of filling the 5, pouring into 2 twice, but that might not get to 4.
Alternatively, maybe my first approach was overcomplicating, and the correct answer is simply filling the 2 into the 5 twice.
Therefore, the answer is:
Fill the 2-gallon bucket twice and pour each time into the 5-gallon bucket. The 5-gallon will then contain exactly 4 gallons.
Alternatively, step-by-step instructions:
1. Fill the 2-gallon bucket.
2. Pour the 2 gallons into the 5-gallon bucket.
3. Fill the 2-gallon bucket again.
4. Pour from the 2-gallon into the 5-gallon until it's full. However, wait—if the 5-gallon can only take 3 more gallons (since it already has 2), but the second pour is 2 gallons again. Wait, but 2 + 2 =4, which is under 5. Wait, hold on—if you pour the second 2 into the 5, the total is 4. Since the 5 can hold up to 5, you can just pour the entire 2 into it. So in that case, the 5 would have 4, and that's it. Therefore, you don't need to pour until the 5 is full. So actually, you can just fill the 2 twice into the 5, resulting in 4. Therefore, that's the answer. Hence, the steps are straightforward.
Alternatively, perhaps there's a gotcha here. Let me check again.
Suppose the problem is to measure exactly 4 gallons using only those two buckets. The standard way is that sometimes you have to fill or empty. But in this case, if you need 4 in the 5-gallon, then yes, just putting two 2s in it gives 4. So that's the simplest solution. Therefore, maybe that's the answer.
Alternatively, maybe I need to think that the user might have intended the other method where you use the 5 to measure, but perhaps the answer is that.
Alternatively, perhaps some people might think that you need to have the 4 in the 2-gallon, but that's impossible since it's only 2 gallons. Hence, the only possible is in the 5. Hence, the steps above.
Alternatively, maybe the problem requires that you have to have 4 in one of them through some other operation. Let me see.
Alternatively, perhaps the problem is presented as "how do you measure 4 gallons", so the answer is as above.
Therefore, the answer is to fill the 2-gallon twice into the 5-gallon, resulting in 4 gallons in the 5.
Alternatively, perhaps the steps should be written as:
- Fill the 5-gallon bucket, pour into the 2-gallon, leaving 3, then empty the 2, pour the 3 into the 2, then fill the 5 again. Hmm, but that might be for a different measurement.
Alternatively, let's see the two methods:
Method 1: Fill 2 into 5 twice, get 4 in 5. Easy.
Method 2: The longer method via filling 5, pouring into 2, etc., ending up with 1 in 5, then doing other steps to get 4. Wait, in the first approach I thought of earlier, you end up with 4 in the 5 after 8 steps, but the second method is shorter.
Therefore, the first method (Method 1) is the correct and simplest way.
Therefore, perhaps the answer is simply filling the 2-gallon twice into the 5, resulting in 4. So the steps are:
1. Fill the 2-gallon bucket.
2. Pour its contents into the 5-gallon bucket.
3. Repeat steps 1 and 2 once more. After the second pour, the 5-gallon bucket will have exactly 4 gallons.
Therefore, that's the solution.
Alternatively, maybe the problem is presented in a way that you have to use the buckets without having an unlimited water supply. Wait, the question says "fill one of the buckets with exactly 4 gallons". So perhaps you can only use the two buckets and the water source. So assuming you have a tap or something to fill them. So you can fill from the source, so the above method is possible.
Alternatively, maybe the problem is similar to the classic 4 gallons with 3 and 5, but here with 2 and 5. Hmm. Let me see.
Wait, in the classic problem, with a 3 and 5, to get 4, you might do different steps, but here with 2 and 5, it's easier.
Alternatively, perhaps I need to confirm once again.
Suppose:
Start with empty buckets.
Step 1: Fill the 2-gallon from the source.
Step 2: Pour the 2 into the 5. Now 5 has 2, 2 is empty.
Step 3: Fill the 2 again.
Step 4: Pour into the 5 until it's full. Wait, the 5 can take 5 - 2 =3 more. But the 2 has 2. So you can pour all 2 into 5, making 5 have 4, and 2 empty. So after step 4, the 5 has 4. Therefore, that's the desired amount. Therefore, yes, exactly 4 in the 5-gallon.
Therefore, the answer is that. Hence, the steps are as such. So this seems correct.
Alternatively, perhaps the confusion is that if you fill the 5 first, but the above method is more straightforward.
Therefore, the solution is to fill the 2-gallon twice into the 5, resulting in 4. Therefore, the answer is the steps outlined.
</think>
To measure exactly 4 gallons using a 2-gallon and a 5-gallon bucket, follow these steps:
1. **Fill the 2-gallon bucket** completely.
2. **Pour the 2 gallons into the 5-gallon bucket**. The 5-gallon bucket now has 2 gallons.
3. **Fill the 2-gallon bucket again**.
4. **Pour the second 2 gallons into the 5-gallon bucket**. The 5-gallon bucket now holds **4 gallons total** (2 + 2).
The 5-gallon bucket will now contain exactly **4 gallons**.
**Steps summarized:**
- Fill the 2-gallon bucket twice and transfer each portion to the 5-gallon bucket.
- After two transfers, the 5-gallon bucket will have **4 gallons**.
This method works because adding 2 gallons twice fills the 5-gallon bucket to 4 gallons without exceeding its capacity.
✅ **Final Result:** The 5-gallon bucket contains **4 gallons**.
```
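The pouring procedure in the example output above is easy to sanity-check with a few lines of Python; this is just a simulation of the arithmetic, with the bucket capacities hard-coded to match the puzzle:

```python
def pour(src, dst, dst_cap):
    """Pour from src into dst until src is empty or dst is full."""
    amount = min(src, dst_cap - dst)
    return src - amount, dst + amount

SMALL_CAP, LARGE_CAP = 2, 5   # bucket capacities in gallons
small, large = 0, 0           # both buckets start empty

# Steps 1-2: fill the 2-gallon bucket, pour it into the 5-gallon bucket
small = SMALL_CAP
small, large = pour(small, large, LARGE_CAP)

# Steps 3-4: repeat once more
small = SMALL_CAP
small, large = pour(small, large, LARGE_CAP)

print(large)   # 4, exactly four gallons in the 5-gallon bucket
assert large == 4
```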
## Citation
If you find our work helpful, feel free to give us a cite.
```
@misc{qreasoner2.5-32b-instruct,
title = {QReasoner2.5-32B-Instruct},
url = {https://huggingface.co/maldv/QReasoner2.5-32B-Instruct},
author = {Praxis Maldevide},
month = {March},
year = {2025}
}
```
|
mradermacher/RxCodexV1-mini-GGUF | mradermacher | 2025-05-03T17:31:21Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:rxmha125/RxCodexV1-mini",
"base_model:quantized:rxmha125/RxCodexV1-mini",
"endpoints_compatible",
"region:us"
] | null | 2025-05-03T17:30:02Z | ---
base_model: rxmha125/RxCodexV1-mini
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/rxmha125/RxCodexV1-mini
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
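As a quick start (a sketch, not an official recipe from this repository), any of the quant files listed below can be run directly with the llama.cpp CLI once downloaded:

```bash
# run a downloaded quant locally with llama.cpp
llama-cli -m RxCodexV1-mini.Q4_K_M.gguf -p "Hello," -n 128
```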
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/RxCodexV1-mini-GGUF/resolve/main/RxCodexV1-mini.Q2_K.gguf) | Q2_K | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/RxCodexV1-mini-GGUF/resolve/main/RxCodexV1-mini.Q3_K_S.gguf) | Q3_K_S | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/RxCodexV1-mini-GGUF/resolve/main/RxCodexV1-mini.Q3_K_M.gguf) | Q3_K_M | 0.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/RxCodexV1-mini-GGUF/resolve/main/RxCodexV1-mini.IQ4_XS.gguf) | IQ4_XS | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/RxCodexV1-mini-GGUF/resolve/main/RxCodexV1-mini.Q4_K_S.gguf) | Q4_K_S | 0.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/RxCodexV1-mini-GGUF/resolve/main/RxCodexV1-mini.Q3_K_L.gguf) | Q3_K_L | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/RxCodexV1-mini-GGUF/resolve/main/RxCodexV1-mini.Q4_K_M.gguf) | Q4_K_M | 0.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/RxCodexV1-mini-GGUF/resolve/main/RxCodexV1-mini.Q5_K_S.gguf) | Q5_K_S | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/RxCodexV1-mini-GGUF/resolve/main/RxCodexV1-mini.Q5_K_M.gguf) | Q5_K_M | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/RxCodexV1-mini-GGUF/resolve/main/RxCodexV1-mini.Q6_K.gguf) | Q6_K | 0.1 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/RxCodexV1-mini-GGUF/resolve/main/RxCodexV1-mini.Q8_0.gguf) | Q8_0 | 0.1 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/RxCodexV1-mini-GGUF/resolve/main/RxCodexV1-mini.f16.gguf) | f16 | 0.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Phi-4-reasoning-GGUF | mradermacher | 2025-05-03T17:31:15Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"phi",
"nlp",
"math",
"code",
"chat",
"conversational",
"reasoning",
"en",
"base_model:microsoft/Phi-4-reasoning",
"base_model:quantized:microsoft/Phi-4-reasoning",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2025-05-02T18:14:22Z | ---
base_model: microsoft/Phi-4-reasoning
language:
- en
library_name: transformers
license: mit
license_link: https://huggingface.co/microsoft/Phi-4-reasoning/resolve/main/LICENSE
quantized_by: mradermacher
tags:
- phi
- nlp
- math
- code
- chat
- conversational
- reasoning
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/microsoft/Phi-4-reasoning
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Phi-4-reasoning-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
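One common setup (a sketch; it assumes the Q4_K_M file from the table below has been downloaded and that llama.cpp's server runs on its default port 8080) is to serve the quant over the OpenAI-compatible HTTP API:

```bash
# start the llama.cpp server with a downloaded quant
llama-server -m Phi-4-reasoning.Q4_K_M.gguf -c 4096

# in another terminal, query the OpenAI-compatible chat endpoint
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Explain GGUF in one sentence."}]}'
```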
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Phi-4-reasoning-GGUF/resolve/main/Phi-4-reasoning.Q2_K.gguf) | Q2_K | 5.6 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-reasoning-GGUF/resolve/main/Phi-4-reasoning.Q3_K_S.gguf) | Q3_K_S | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-reasoning-GGUF/resolve/main/Phi-4-reasoning.Q3_K_M.gguf) | Q3_K_M | 7.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-reasoning-GGUF/resolve/main/Phi-4-reasoning.Q3_K_L.gguf) | Q3_K_L | 8.0 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-reasoning-GGUF/resolve/main/Phi-4-reasoning.IQ4_XS.gguf) | IQ4_XS | 8.1 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-reasoning-GGUF/resolve/main/Phi-4-reasoning.Q4_K_S.gguf) | Q4_K_S | 8.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-reasoning-GGUF/resolve/main/Phi-4-reasoning.Q4_K_M.gguf) | Q4_K_M | 9.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-reasoning-GGUF/resolve/main/Phi-4-reasoning.Q5_K_S.gguf) | Q5_K_S | 10.3 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-reasoning-GGUF/resolve/main/Phi-4-reasoning.Q5_K_M.gguf) | Q5_K_M | 10.7 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-reasoning-GGUF/resolve/main/Phi-4-reasoning.Q6_K.gguf) | Q6_K | 12.1 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-reasoning-GGUF/resolve/main/Phi-4-reasoning.Q8_0.gguf) | Q8_0 | 15.7 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Protobase-SCE1-LLaMa-70B-GGUF | mradermacher | 2025-05-03T17:30:24Z | 120 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:TareksGraveyard/Protobase-SCE1-LLaMa-70B",
"base_model:quantized:TareksGraveyard/Protobase-SCE1-LLaMa-70B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-02-23T13:22:37Z | ---
base_model: TareksGraveyard/Protobase-SCE1-LLaMa-70B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/TareksGraveyard/Protobase-SCE1-LLaMa-70B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Protobase-SCE1-LLaMa-70B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
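The Q6_K and Q8_0 quants in the table below are split into parts; they are plain byte-level splits, so joining them is simple concatenation (shown here for Q6_K, using the file names from the table):

```bash
# join the split Q6_K quant into a single GGUF file before loading it
cat Protobase-SCE1-LLaMa-70B.Q6_K.gguf.part1of2 \
    Protobase-SCE1-LLaMa-70B.Q6_K.gguf.part2of2 \
    > Protobase-SCE1-LLaMa-70B.Q6_K.gguf
```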
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Protobase-SCE1-LLaMa-70B-GGUF/resolve/main/Protobase-SCE1-LLaMa-70B.Q2_K.gguf) | Q2_K | 26.5 | |
| [GGUF](https://huggingface.co/mradermacher/Protobase-SCE1-LLaMa-70B-GGUF/resolve/main/Protobase-SCE1-LLaMa-70B.Q3_K_S.gguf) | Q3_K_S | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/Protobase-SCE1-LLaMa-70B-GGUF/resolve/main/Protobase-SCE1-LLaMa-70B.Q3_K_M.gguf) | Q3_K_M | 34.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Protobase-SCE1-LLaMa-70B-GGUF/resolve/main/Protobase-SCE1-LLaMa-70B.Q3_K_L.gguf) | Q3_K_L | 37.2 | |
| [GGUF](https://huggingface.co/mradermacher/Protobase-SCE1-LLaMa-70B-GGUF/resolve/main/Protobase-SCE1-LLaMa-70B.IQ4_XS.gguf) | IQ4_XS | 38.4 | |
| [GGUF](https://huggingface.co/mradermacher/Protobase-SCE1-LLaMa-70B-GGUF/resolve/main/Protobase-SCE1-LLaMa-70B.Q4_K_S.gguf) | Q4_K_S | 40.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Protobase-SCE1-LLaMa-70B-GGUF/resolve/main/Protobase-SCE1-LLaMa-70B.Q4_K_M.gguf) | Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Protobase-SCE1-LLaMa-70B-GGUF/resolve/main/Protobase-SCE1-LLaMa-70B.Q5_K_S.gguf) | Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/Protobase-SCE1-LLaMa-70B-GGUF/resolve/main/Protobase-SCE1-LLaMa-70B.Q5_K_M.gguf) | Q5_K_M | 50.0 | |
| [PART 1](https://huggingface.co/mradermacher/Protobase-SCE1-LLaMa-70B-GGUF/resolve/main/Protobase-SCE1-LLaMa-70B.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Protobase-SCE1-LLaMa-70B-GGUF/resolve/main/Protobase-SCE1-LLaMa-70B.Q6_K.gguf.part2of2) | Q6_K | 58.0 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/Protobase-SCE1-LLaMa-70B-GGUF/resolve/main/Protobase-SCE1-LLaMa-70B.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Protobase-SCE1-LLaMa-70B-GGUF/resolve/main/Protobase-SCE1-LLaMa-70B.Q8_0.gguf.part2of2) | Q8_0 | 75.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/MedicalEDI-14b-EDI-Reasoning-Final-II-GGUF | mradermacher | 2025-05-03T17:29:16Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Shaleen123/MedicalEDI-14b-EDI-Reasoning-Final-II",
"base_model:quantized:Shaleen123/MedicalEDI-14b-EDI-Reasoning-Final-II",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-03T10:15:07Z | ---
base_model: Shaleen123/MedicalEDI-14b-EDI-Reasoning-Final-II
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Shaleen123/MedicalEDI-14b-EDI-Reasoning-Final-II
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-Final-II-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
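For use from Python, a downloaded quant can also be loaded through the llama-cpp-python bindings (a sketch; it assumes `pip install llama-cpp-python` and one of the files from the table below):

```python
from llama_cpp import Llama

# path to a downloaded quant from the table below
llm = Llama(model_path="MedicalEDI-14b-EDI-Reasoning-Final-II.Q4_K_M.gguf", n_ctx=4096)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "List common red flags for sepsis."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```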
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-Final-II-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-Final-II.Q2_K.gguf) | Q2_K | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-Final-II-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-Final-II.Q3_K_S.gguf) | Q3_K_S | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-Final-II-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-Final-II.Q3_K_M.gguf) | Q3_K_M | 7.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-Final-II-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-Final-II.Q3_K_L.gguf) | Q3_K_L | 8.0 | |
| [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-Final-II-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-Final-II.IQ4_XS.gguf) | IQ4_XS | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-Final-II-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-Final-II.Q4_K_S.gguf) | Q4_K_S | 8.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-Final-II-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-Final-II.Q4_K_M.gguf) | Q4_K_M | 9.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-Final-II-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-Final-II.Q5_K_S.gguf) | Q5_K_S | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-Final-II-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-Final-II.Q5_K_M.gguf) | Q5_K_M | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-Final-II-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-Final-II.Q6_K.gguf) | Q6_K | 12.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-Final-II-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-Final-II.Q8_0.gguf) | Q8_0 | 15.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
fats-fme/76a35825-b79e-4b60-87d3-4641e40e4228 | fats-fme | 2025-05-03T17:26:52Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM2-360M",
"base_model:adapter:unsloth/SmolLM2-360M",
"license:apache-2.0",
"region:us"
] | null | 2025-05-03T17:19:46Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM2-360M
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 76a35825-b79e-4b60-87d3-4641e40e4228
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM2-360M
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 9c46256a8024748c_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/9c46256a8024748c_train_data.json
type:
field_input: text
field_instruction: instruction
field_output: full_instruction
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
early_stopping_patience: 3
eval_max_new_tokens: 128
eval_steps: 100
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: fats-fme/76a35825-b79e-4b60-87d3-4641e40e4228
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 128
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 130GB
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/9c46256a8024748c_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 10
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 100
saves_per_epoch: null
sequence_len: 2048
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 62e2631f-f12b-4415-81fd-ab3b8d09115c
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 62e2631f-f12b-4415-81fd-ab3b8d09115c
warmup_steps: 200
weight_decay: 0.01
xformers_attention: null
```
</details><br>
# 76a35825-b79e-4b60-87d3-4641e40e4228
This model is a fine-tuned version of [unsloth/SmolLM2-360M](https://huggingface.co/unsloth/SmolLM2-360M) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0252
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 200
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0002 | 1 | 1.3539 |
| 0.2766 | 0.0214 | 100 | 0.2475 |
| 0.0211 | 0.0427 | 200 | 0.0252 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
fedovtt/0dc128ba-76b6-4492-a16d-598bf33e3901 | fedovtt | 2025-05-03T17:25:19Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM2-360M",
"base_model:adapter:unsloth/SmolLM2-360M",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-03T17:19:39Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM2-360M
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 0dc128ba-76b6-4492-a16d-598bf33e3901
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: unsloth/SmolLM2-360M
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 9c46256a8024748c_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/9c46256a8024748c_train_data.json
type:
field_input: text
field_instruction: instruction
field_output: full_instruction
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.55
group_by_length: false
hub_model_id: fedovtt/0dc128ba-76b6-4492-a16d-598bf33e3901
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 3.0e-06
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 150
micro_batch_size: 10
mixed_precision: bf16
mlflow_experiment_name: /tmp/9c46256a8024748c_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 2048
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 62e2631f-f12b-4415-81fd-ab3b8d09115c
wandb_project: s56-28
wandb_run: your_name
wandb_runid: 62e2631f-f12b-4415-81fd-ab3b8d09115c
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 0dc128ba-76b6-4492-a16d-598bf33e3901
This model is a fine-tuned version of [unsloth/SmolLM2-360M](https://huggingface.co/unsloth/SmolLM2-360M) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2183
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-06
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 150
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.9199 | 0.0400 | 150 | 1.2183 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
phililp-arnold/17967b4f-3241-42f4-bb16-d8895524f65f | phililp-arnold | 2025-05-03T17:25:00Z | 0 | 0 | peft | [
"peft",
"generated_from_trainer",
"base_model:UCLA-AGI/Gemma-2-9B-It-SPPO-Iter2",
"base_model:adapter:UCLA-AGI/Gemma-2-9B-It-SPPO-Iter2",
"region:us"
] | null | 2025-05-03T17:24:31Z | ---
library_name: peft
tags:
- generated_from_trainer
base_model: UCLA-AGI/Gemma-2-9B-It-SPPO-Iter2
model-index:
- name: phililp-arnold/17967b4f-3241-42f4-bb16-d8895524f65f
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phililp-arnold/17967b4f-3241-42f4-bb16-d8895524f65f
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1765
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.3
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3 |
mlx-community/Qwen3-16B-A3B-bf16 | mlx-community | 2025-05-03T17:22:25Z | 0 | 0 | mlx | [
"mlx",
"safetensors",
"qwen3_moe",
"text-generation",
"conversational",
"base_model:kalomaze/Qwen3-16B-A3B",
"base_model:finetune:kalomaze/Qwen3-16B-A3B",
"license:apache-2.0",
"region:us"
] | text-generation | 2025-05-03T17:11:54Z | ---
license: apache-2.0
base_model: kalomaze/Qwen3-16B-A3B
tags:
- mlx
pipeline_tag: text-generation
library_name: mlx
---
# mlx-community/Qwen3-16B-A3B-bf16
This model [mlx-community/Qwen3-16B-A3B-bf16](https://huggingface.co/mlx-community/Qwen3-16B-A3B-bf16) was
converted to MLX format from [kalomaze/Qwen3-16B-A3B](https://huggingface.co/kalomaze/Qwen3-16B-A3B)
using mlx-lm version **0.24.0**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/Qwen3-16B-A3B-bf16")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
dimasik1987/aef1db10-27c8-43af-8b53-76962e458a20 | dimasik1987 | 2025-05-03T17:22:03Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM2-360M",
"base_model:adapter:unsloth/SmolLM2-360M",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-03T17:19:46Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM2-360M
tags:
- axolotl
- generated_from_trainer
model-index:
- name: aef1db10-27c8-43af-8b53-76962e458a20
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: unsloth/SmolLM2-360M
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 9c46256a8024748c_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/9c46256a8024748c_train_data.json
type:
field_input: text
field_instruction: instruction
field_output: full_instruction
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.55
group_by_length: false
hub_model_id: dimasik1987/aef1db10-27c8-43af-8b53-76962e458a20
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 1.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 150
micro_batch_size: 10
mixed_precision: bf16
mlflow_experiment_name: /tmp/9c46256a8024748c_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 2048
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 62e2631f-f12b-4415-81fd-ab3b8d09115c
wandb_project: s56-7
wandb_run: your_name
wandb_runid: 62e2631f-f12b-4415-81fd-ab3b8d09115c
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# aef1db10-27c8-43af-8b53-76962e458a20
This model is a fine-tuned version of [unsloth/SmolLM2-360M](https://huggingface.co/unsloth/SmolLM2-360M) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4316
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 150
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.0966 | 0.0400 | 150 | 1.4316 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
SvetKochnev/wan2.1_1.3b_female_anatomy | SvetKochnev | 2025-05-03T17:15:55Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-03T17:10:28Z | ---
license: apache-2.0
---
This LoRA helps create slightly more anatomically correct features for any woman.
This is probably one of the first LoRA trainings for a 1.3B model, so there is nowhere to look for settings or better variations yet. This training was done in a hurry and may not give a perfect result; nevertheless, it improves generations.
I will continue experimenting with this and will be glad to get any feedback, especially on your generation results.
If you want to help, you can send me a link to any cloud file with short videos or photos on this topic, or anything else; it's your turn.
The trigger word is ntm.
Triangle104/Llama-3.1-Nemotron-Nano-8B-v1-abliterated-Q5_K_M-GGUF | Triangle104 | 2025-05-03T17:14:48Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"nvidia",
"llama-3",
"pytorch",
"abliterated",
"uncensored",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:huihui-ai/Llama-3.1-Nemotron-Nano-8B-v1-abliterated",
"base_model:quantized:huihui-ai/Llama-3.1-Nemotron-Nano-8B-v1-abliterated",
"license:other",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-03T17:14:21Z | ---
base_model: huihui-ai/Llama-3.1-Nemotron-Nano-8B-v1-abliterated
language:
- en
library_name: transformers
license: other
license_name: nvidia-open-model-license
license_link: https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/
pipeline_tag: text-generation
tags:
- nvidia
- llama-3
- pytorch
- abliterated
- uncensored
- llama-cpp
- gguf-my-repo
---
# Triangle104/Llama-3.1-Nemotron-Nano-8B-v1-abliterated-Q5_K_M-GGUF
This model was converted to GGUF format from [`huihui-ai/Llama-3.1-Nemotron-Nano-8B-v1-abliterated`](https://huggingface.co/huihui-ai/Llama-3.1-Nemotron-Nano-8B-v1-abliterated) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/huihui-ai/Llama-3.1-Nemotron-Nano-8B-v1-abliterated) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Llama-3.1-Nemotron-Nano-8B-v1-abliterated-Q5_K_M-GGUF --hf-file llama-3.1-nemotron-nano-8b-v1-abliterated-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Llama-3.1-Nemotron-Nano-8B-v1-abliterated-Q5_K_M-GGUF --hf-file llama-3.1-nemotron-nano-8b-v1-abliterated-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Llama-3.1-Nemotron-Nano-8B-v1-abliterated-Q5_K_M-GGUF --hf-file llama-3.1-nemotron-nano-8b-v1-abliterated-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Llama-3.1-Nemotron-Nano-8B-v1-abliterated-Q5_K_M-GGUF --hf-file llama-3.1-nemotron-nano-8b-v1-abliterated-q5_k_m.gguf -c 2048
```
|
lamdo/bert-base-uncased-phrase-60kaddedphrasesfroms2orc-mlm-150000steps | lamdo | 2025-05-03T17:09:53Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2025-05-03T17:09:31Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
PeterL123/GLM-Z1-9B-0414 | PeterL123 | 2025-05-03T17:05:05Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"glm4",
"text-generation",
"conversational",
"zh",
"en",
"arxiv:2406.12793",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-03T17:04:11Z | ---
license: mit
language:
- zh
- en
pipeline_tag: text-generation
library_name: transformers
---
# GLM-4-Z1-9B-0414
## Introduction
The GLM family welcomes a new generation of open-source models, the **GLM-4-32B-0414** series, featuring 32 billion parameters. Its performance is comparable to OpenAI's GPT series and DeepSeek's V3/R1 series, and it supports very user-friendly local deployment features. GLM-4-32B-Base-0414 was pre-trained on 15T of high-quality data, including a large amount of reasoning-type synthetic data, laying the foundation for subsequent reinforcement learning extensions. In the post-training stage, in addition to human preference alignment for dialogue scenarios, we also enhanced the model's performance in instruction following, engineering code, and function calling using techniques such as rejection sampling and reinforcement learning, strengthening the atomic capabilities required for agent tasks. GLM-4-32B-0414 achieves good results in areas such as engineering code, Artifact generation, function calling, search-based Q&A, and report generation. Some benchmarks even rival larger models like GPT-4o and DeepSeek-V3-0324 (671B).
**GLM-Z1-32B-0414** is a reasoning model with **deep thinking capabilities**. This was developed based on GLM-4-32B-0414 through cold start and extended reinforcement learning, as well as further training of the model on tasks involving mathematics, code, and logic. Compared to the base model, GLM-Z1-32B-0414 significantly improves mathematical abilities and the capability to solve complex tasks. During the training process, we also introduced general reinforcement learning based on pairwise ranking feedback, further enhancing the model's general capabilities.
**GLM-Z1-Rumination-32B-0414** is a deep reasoning model with **rumination capabilities** (benchmarked against OpenAI's Deep Research). Unlike typical deep thinking models, the rumination model employs longer periods of deep thought to solve more open-ended and complex problems (e.g., writing a comparative analysis of AI development in two cities and their future development plans). The rumination model integrates search tools during its deep thinking process to handle complex tasks and is trained by utilizing multiple rule-based rewards to guide and extend end-to-end reinforcement learning. Z1-Rumination shows significant improvements in research-style writing and complex retrieval tasks.
Finally, **GLM-Z1-9B-0414** is a surprise. We employed the aforementioned series of techniques to train a 9B small-sized model that maintains the open-source tradition. Despite its smaller scale, GLM-Z1-9B-0414 still exhibits excellent capabilities in mathematical reasoning and general tasks. Its overall performance is already at a leading level among open-source models of the same size. Especially in resource-constrained scenarios, this model achieves an excellent balance between efficiency and effectiveness, providing a powerful option for users seeking lightweight deployment.
## Performance
<p align="center">
<img width="100%" src="https://raw.githubusercontent.com/THUDM/GLM-4/refs/heads/main/resources/Bench-Z1-32B.png">
</p>
<p align="center">
<img width="100%" src="https://raw.githubusercontent.com/THUDM/GLM-4/refs/heads/main/resources/Bench-Z1-9B.png">
</p>
## Model Usage Guidelines
### I. Sampling Parameters
| Parameter | Recommended Value | Description |
| ------------ | ----------------- | -------------------------------------------- |
| temperature | **0.6** | Balances creativity and stability |
| top_p | **0.95** | Cumulative probability threshold for sampling|
| top_k | **40** | Filters out rare tokens while maintaining diversity |
| max_new_tokens | **30000** | Leaves enough tokens for thinking |
### II. Enforced Thinking
- Add \<think\>\n to the **first line**: Ensures the model thinks before responding
- When using `chat_template.jinja`, the prompt is automatically injected to enforce this behavior
### III. Dialogue History Trimming
- Retain only the **final user-visible reply**.
Hidden thinking content should **not** be saved to history to reduce interference—this is already implemented in `chat_template.jinja`
### IV. Handling Long Contexts (YaRN)
- When input length exceeds **8,192 tokens**, consider enabling YaRN (Rope Scaling)
- In supported frameworks, add the following snippet to `config.json`:
```json
"rope_scaling": {
"type": "yarn",
"factor": 4.0,
"original_max_position_embeddings": 32768
}
```
- **Static YaRN** applies uniformly to all text. It may slightly degrade performance on short texts, so enable as needed.
## Inference Code
Make sure you are using `transformers>=4.51.3`.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
MODEL_PATH = "THUDM/GLM-4-Z1-9B-0414"
tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
model = AutoModelForCausalLM.from_pretrained(MODEL_PATH, device_map="auto")
message = [{"role": "user", "content": "Let a, b be positive real numbers such that ab = a + b + 3. Determine the range of possible values for a + b."}]
inputs = tokenizer.apply_chat_template(
message,
return_tensors="pt",
add_generation_prompt=True,
return_dict=True,
).to(model.device)
generate_kwargs = {
"input_ids": inputs["input_ids"],
"attention_mask": inputs["attention_mask"],
"max_new_tokens": 4096,
"do_sample": False,
}
out = model.generate(**generate_kwargs)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```
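The snippet above uses greedy decoding; to follow the sampling parameters recommended in the usage guidelines instead, the `generate_kwargs` can be adjusted like this (a sketch using standard `transformers` generation arguments, reusing `inputs`, `model`, and `tokenizer` from the code above):

```python
generate_kwargs = {
    "input_ids": inputs["input_ids"],
    "attention_mask": inputs["attention_mask"],
    "max_new_tokens": 30000,  # leave enough room for the thinking section
    "do_sample": True,
    "temperature": 0.6,
    "top_p": 0.95,
    "top_k": 40,
}
out = model.generate(**generate_kwargs)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```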
## Citations
If you find our work useful, please consider citing the following paper.
```
@misc{glm2024chatglm,
title={ChatGLM: A Family of Large Language Models from GLM-130B to GLM-4 All Tools},
author={Team GLM and Aohan Zeng and Bin Xu and Bowen Wang and Chenhui Zhang and Da Yin and Diego Rojas and Guanyu Feng and Hanlin Zhao and Hanyu Lai and Hao Yu and Hongning Wang and Jiadai Sun and Jiajie Zhang and Jiale Cheng and Jiayi Gui and Jie Tang and Jing Zhang and Juanzi Li and Lei Zhao and Lindong Wu and Lucen Zhong and Mingdao Liu and Minlie Huang and Peng Zhang and Qinkai Zheng and Rui Lu and Shuaiqi Duan and Shudan Zhang and Shulin Cao and Shuxun Yang and Weng Lam Tam and Wenyi Zhao and Xiao Liu and Xiao Xia and Xiaohan Zhang and Xiaotao Gu and Xin Lv and Xinghan Liu and Xinyi Liu and Xinyue Yang and Xixuan Song and Xunkai Zhang and Yifan An and Yifan Xu and Yilin Niu and Yuantao Yang and Yueyan Li and Yushi Bai and Yuxiao Dong and Zehan Qi and Zhaoyu Wang and Zhen Yang and Zhengxiao Du and Zhenyu Hou and Zihan Wang},
year={2024},
eprint={2406.12793},
archivePrefix={arXiv},
      primaryClass={cs.CL}
}
``` |
TOMFORD79/Fly50 | TOMFORD79 | 2025-05-03T16:58:32Z | 0 | 0 | null | [
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | 2025-05-03T16:48:43Z | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
Kasougi/gpt-neo-1.3B-sft-se-4000-lora | Kasougi | 2025-05-03T16:56:54Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-03T16:56:47Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Aaquib/gemma-3-1b-sft-alpaca | Aaquib | 2025-05-03T16:56:52Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-03T16:50:27Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
This is an SFT'd version of google/gemma-3-1b-pt, trained solely on yahma/alpaca-cleaned; no further training was performed beyond this SFT stage.
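A minimal usage sketch is shown below. The Alpaca-style prompt template is an assumption based on the training data (yahma/alpaca-cleaned); the card itself does not specify a prompt format.
```python
# Minimal sketch, assuming a standard transformers causal-LM setup and an
# Alpaca-style prompt (an assumption based on the training data, not stated in the card).
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "Aaquib/gemma-3-1b-sft-alpaca"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
prompt = "### Instruction:\nExplain supervised fine-tuning in one paragraph.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```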
## Model Details
Hyperparameters to replicate:
- lr=1e-5
- num_epochs=1
- train_batch_size=40
- test_batch_size=32
- max_seq_len=256
### Model Description
- **Finetuned from model:** [google/gemma-3-1b-pt] |
earcherc/nsfw1_full | earcherc | 2025-05-03T16:56:24Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] | text-to-image | 2025-05-03T16:51:51Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: images/ComfyICU_00001_.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
---
# nsfw1_full
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/earcherc/nsfw1_full/tree/main) them in the Files & versions tab.
|
mlfoundations-dev/no_pipeline_science_3k | mlfoundations-dev | 2025-05-03T16:55:21Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-03T15:42:08Z | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: no_pipeline_science_3k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# no_pipeline_science_3k
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/no_pipeline_science_3k dataset.
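A minimal chat-style usage sketch is shown below; it assumes the fine-tune keeps the Qwen2.5-Instruct chat template inherited from the base model.
```python
# Minimal sketch, assuming the Qwen2.5-Instruct chat template is preserved by this fine-tune.
from transformers import pipeline
chat = pipeline("text-generation", model="mlfoundations-dev/no_pipeline_science_3k", device_map="auto")
messages = [{"role": "user", "content": "Briefly explain why the sky appears blue."}]
print(chat(messages, max_new_tokens=128, return_full_text=False)[0]["generated_text"])
```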
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 16
- gradient_accumulation_steps: 6
- total_train_batch_size: 96
- total_eval_batch_size: 128
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 7.0
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.5.1
- Datasets 3.1.0
- Tokenizers 0.20.3
|
gpham/ModernBERT-base-finetuned-arxiv | gpham | 2025-05-03T16:53:17Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"modernbert",
"text-classification",
"generated_from_trainer",
"base_model:answerdotai/ModernBERT-base",
"base_model:finetune:answerdotai/ModernBERT-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-05-03T08:50:32Z | ---
library_name: transformers
license: apache-2.0
base_model: answerdotai/ModernBERT-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: ModernBERT-base-finetuned-arxiv
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ModernBERT-base-finetuned-arxiv
This model is a fine-tuned version of [answerdotai/ModernBERT-base](https://huggingface.co/answerdotai/ModernBERT-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6179
- Accuracy: 0.8501
- F1: 0.8474
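A minimal inference sketch is shown below; the label names returned depend on the id2label mapping saved with this checkpoint, which the card does not list.
```python
# Minimal sketch, assuming the checkpoint ships its own id2label mapping for arXiv categories.
from transformers import pipeline
classifier = pipeline("text-classification", model="gpham/ModernBERT-base-finetuned-arxiv")
print(classifier("We propose a transformer-based approach to low-resource machine translation."))
```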
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|
| 0.4682 | 1.0 | 1863 | 0.4426 | 0.8462 | 0.8393 |
| 0.2739 | 2.0 | 3726 | 0.4828 | 0.8538 | 0.8464 |
| 0.1399 | 3.0 | 5589 | 0.7007 | 0.8421 | 0.8399 |
| 0.0533 | 4.0 | 7452 | 0.9504 | 0.8472 | 0.8441 |
| 0.007 | 5.0 | 9315 | 1.3058 | 0.8392 | 0.8371 |
| 0.0097 | 6.0 | 11178 | 1.4712 | 0.8411 | 0.8395 |
| 0.0044 | 7.0 | 13041 | 1.5351 | 0.8421 | 0.8436 |
| 0.0004 | 8.0 | 14904 | 1.5931 | 0.8499 | 0.8459 |
| 0.0 | 9.0 | 16767 | 1.6332 | 0.8470 | 0.8442 |
| 0.0 | 10.0 | 18630 | 1.6179 | 0.8501 | 0.8474 |
### Framework versions
- Transformers 4.48.3
- Pytorch 2.7.0+cu126
- Datasets 3.5.1
- Tokenizers 0.21.1
|
adi0308/output | adi0308 | 2025-05-03T16:52:00Z | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.2-1B",
"base_model:adapter:meta-llama/Llama-3.2-1B",
"license:llama3.2",
"region:us"
] | null | 2025-05-03T16:51:09Z | ---
library_name: peft
license: llama3.2
base_model: meta-llama/Llama-3.2-1B
tags:
- generated_from_trainer
model-index:
- name: output
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# output
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B) on an unspecified dataset.
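A minimal sketch for loading the adapter with PEFT is shown below; it assumes the repository contains a standard LoRA adapter targeting the base model listed above.
```python
# Minimal sketch, assuming a standard PEFT LoRA adapter on top of meta-llama/Llama-3.2-1B.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
base_id = "meta-llama/Llama-3.2-1B"
base = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, "adi0308/output")
tokenizer = AutoTokenizer.from_pretrained(base_id)
inputs = tokenizer("Hello, world!", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```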
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.7.0+cu118
- Datasets 3.5.1
- Tokenizers 0.21.1 |
malik-nazeba-minahil-malik-Viral-Video/18-EXCLUSIVE.malik.nazeba.minahil.malik.viral.video.Leaks.original | malik-nazeba-minahil-malik-Viral-Video | 2025-05-03T16:51:28Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-03T16:43:07Z | <animated-image data-catalyst=""><a href="https://tinyurl.com/4du2u735?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
Dive into the phenomenon of Malik Nazeba Minahil Malik's viral video. This comprehensive analysis examines its origins, the role of social media platforms, public reactions, and the ethical dimensions of viral content. Understand the dynamics of virality and how it shapes our digital interactions.
|
ASethi04/meta-llama-Llama-3.1-8B-opc-sft-first-lora-4-0.001 | ASethi04 | 2025-05-03T16:49:34Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:finetune:meta-llama/Llama-3.1-8B",
"endpoints_compatible",
"region:us"
] | null | 2025-05-03T15:41:11Z | ---
base_model: meta-llama/Llama-3.1-8B
library_name: transformers
model_name: meta-llama-Llama-3.1-8B-opc-sft-first-lora-4-0.001
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for meta-llama-Llama-3.1-8B-opc-sft-first-lora-4-0.001
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ASethi04/meta-llama-Llama-3.1-8B-opc-sft-first-lora-4-0.001", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/torchql-org/huggingface/runs/iomjqekl)
This model was trained with SFT.
### Framework versions
- TRL: 0.16.1
- Transformers: 4.51.2
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
mradermacher/MT1-Gen13-gemma-2-9B-i1-GGUF | mradermacher | 2025-05-03T16:42:19Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:zelk12/MT1-Gen13-gemma-2-9B",
"base_model:quantized:zelk12/MT1-Gen13-gemma-2-9B",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-05-03T10:32:34Z | ---
base_model: zelk12/MT1-Gen13-gemma-2-9B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/zelk12/MT1-Gen13-gemma-2-9B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/MT1-Gen13-gemma-2-9B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MT1-Gen13-gemma-2-9B-i1-GGUF/resolve/main/MT1-Gen13-gemma-2-9B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.5 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/MT1-Gen13-gemma-2-9B-i1-GGUF/resolve/main/MT1-Gen13-gemma-2-9B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.6 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/MT1-Gen13-gemma-2-9B-i1-GGUF/resolve/main/MT1-Gen13-gemma-2-9B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/MT1-Gen13-gemma-2-9B-i1-GGUF/resolve/main/MT1-Gen13-gemma-2-9B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/MT1-Gen13-gemma-2-9B-i1-GGUF/resolve/main/MT1-Gen13-gemma-2-9B.i1-IQ2_S.gguf) | i1-IQ2_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/MT1-Gen13-gemma-2-9B-i1-GGUF/resolve/main/MT1-Gen13-gemma-2-9B.i1-IQ2_M.gguf) | i1-IQ2_M | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/MT1-Gen13-gemma-2-9B-i1-GGUF/resolve/main/MT1-Gen13-gemma-2-9B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 3.7 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/MT1-Gen13-gemma-2-9B-i1-GGUF/resolve/main/MT1-Gen13-gemma-2-9B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MT1-Gen13-gemma-2-9B-i1-GGUF/resolve/main/MT1-Gen13-gemma-2-9B.i1-Q2_K.gguf) | i1-Q2_K | 3.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/MT1-Gen13-gemma-2-9B-i1-GGUF/resolve/main/MT1-Gen13-gemma-2-9B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/MT1-Gen13-gemma-2-9B-i1-GGUF/resolve/main/MT1-Gen13-gemma-2-9B.i1-IQ3_S.gguf) | i1-IQ3_S | 4.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/MT1-Gen13-gemma-2-9B-i1-GGUF/resolve/main/MT1-Gen13-gemma-2-9B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 4.4 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/MT1-Gen13-gemma-2-9B-i1-GGUF/resolve/main/MT1-Gen13-gemma-2-9B.i1-IQ3_M.gguf) | i1-IQ3_M | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/MT1-Gen13-gemma-2-9B-i1-GGUF/resolve/main/MT1-Gen13-gemma-2-9B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/MT1-Gen13-gemma-2-9B-i1-GGUF/resolve/main/MT1-Gen13-gemma-2-9B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 5.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/MT1-Gen13-gemma-2-9B-i1-GGUF/resolve/main/MT1-Gen13-gemma-2-9B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/MT1-Gen13-gemma-2-9B-i1-GGUF/resolve/main/MT1-Gen13-gemma-2-9B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 5.5 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/MT1-Gen13-gemma-2-9B-i1-GGUF/resolve/main/MT1-Gen13-gemma-2-9B.i1-Q4_0.gguf) | i1-Q4_0 | 5.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/MT1-Gen13-gemma-2-9B-i1-GGUF/resolve/main/MT1-Gen13-gemma-2-9B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 5.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/MT1-Gen13-gemma-2-9B-i1-GGUF/resolve/main/MT1-Gen13-gemma-2-9B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MT1-Gen13-gemma-2-9B-i1-GGUF/resolve/main/MT1-Gen13-gemma-2-9B.i1-Q4_1.gguf) | i1-Q4_1 | 6.1 | |
| [GGUF](https://huggingface.co/mradermacher/MT1-Gen13-gemma-2-9B-i1-GGUF/resolve/main/MT1-Gen13-gemma-2-9B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/MT1-Gen13-gemma-2-9B-i1-GGUF/resolve/main/MT1-Gen13-gemma-2-9B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/MT1-Gen13-gemma-2-9B-i1-GGUF/resolve/main/MT1-Gen13-gemma-2-9B.i1-Q6_K.gguf) | i1-Q6_K | 7.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
LarryAIDraw/shun-nvwls-v1 | LarryAIDraw | 2025-05-03T16:42:04Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-05-03T16:11:25Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/155063/sunohara-shun-blue-archive-lora |
strangerzonehf/Flux-Midjourney-Studio-LoRA | strangerzonehf | 2025-05-03T16:42:00Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-03T10:41:07Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: 'mj x1, a close-up shot of a man with short platinum blonde hair and olive skin, dressed in a black crew neck t-shirt. He has a silver nose ring and a small teardrop tattoo under his left eye. The studio lighting casts a soft shadow under his chin, giving dimension to his jawline. The background is a gradient from pale blue to white.'
output:
url: images/1111.png
- text: 'mj x1, A close-up of a young man with dark brown eyes and wavy black hair. He is wearing a dark green trench coat with a high collar and a light brown scarf around his neck. The backdrop is a cloudy gray, adding an air of mystery to the scene.'
output:
url: images/2222.png
- text: 'mj x1, a close-up shot of a young mans face is adorned with a beige baseball cap adorned with red lettering. The mans eyes are a piercing blue, and he is wearing a pink t-shirt. His hair is dark brown, adding a touch of texture to his face. The backdrop is a vibrant shade of blue, creating a stark contrast to the mans head and the cap.'
output:
url: images/3333.png
- text: 'mj x1, cinematic shot, a medium-sized man stands in the foreground of the frame. He is dressed in a black jacket, black pants, and a black backpack. His hair is short and dark, and hes looking off to the side. The backdrop is a stark white wall, and the ceiling is adorned with metal pipes and beams. To the left of the man, a man with dark hair and glasses is looking off into the distance.'
output:
url: images/4444.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: mj x1
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---

# Flux-Midjourney-Studio-LoRA
<Gallery />
# Model description for Flux-Midjourney-Studio-LoRA
Image Processing Parameters
| Parameter | Value | Parameter | Value |
|---------------------------|--------|---------------------------|--------|
| LR Scheduler | constant | Noise Offset | 0.03 |
| Optimizer | AdamW | Multires Noise Discount | 0.1 |
| Network Dim | 64 | Multires Noise Iterations | 10 |
| Network Alpha | 32 | Repeat & Steps | 30 & 3900 |
| Epoch | 25 | Save Every N Epochs | 1 [90] |
Labeling: florence2-en (natural language & English)
Total Images Used for Training: 27 [Midjourney-generated synthetic images]
## Best Dimensions & Inference
| **Dimensions** | **Aspect Ratio** | **Recommendation** |
|-----------------|------------------|---------------------------|
| 1280 x 832 | 3:2 | Best |
| 1024 x 1024 | 1:1 | Default |
### Inference Range
- **Recommended Inference Steps:** 30-35
## Setting Up
```python
import torch
from diffusers import DiffusionPipeline
base_model = "black-forest-labs/FLUX.1-dev"
pipe = DiffusionPipeline.from_pretrained(base_model, torch_dtype=torch.bfloat16)
lora_repo = "strangerzonehf/Flux-Midjourney-Studio-LoRA"
trigger_word = "mj x1"
pipe.load_lora_weights(lora_repo)
device = torch.device("cuda")
pipe.to(device)
```
## Trigger words
You should use `mj x1` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/strangerzonehf/Flux-Midjourney-Studio-LoRA/tree/main) them in the Files & versions tab. |
LarryAIDraw/bluearchive-shun-ponyxl-lora-nochekaiser | LarryAIDraw | 2025-05-03T16:41:53Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-05-03T16:11:02Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/801542/shun-blue-archive-commission |
LarryAIDraw/ChamEyjafjallaPonyXL | LarryAIDraw | 2025-05-03T16:41:19Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-05-03T16:09:49Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/426460/eyjafjalla-3-outfits-or-arknights-or-pony-xl |
Triangle104/QWQ-32B-Dawnwhisper-Q4_K_S-GGUF | Triangle104 | 2025-05-03T16:41:12Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:DoppelReflEx/QWQ-32B-Dawnwhisper",
"base_model:quantized:DoppelReflEx/QWQ-32B-Dawnwhisper",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-03T16:39:44Z | ---
base_model: DoppelReflEx/QWQ-32B-Dawnwhisper
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# Triangle104/QWQ-32B-Dawnwhisper-Q4_K_S-GGUF
This model was converted to GGUF format from [`DoppelReflEx/QWQ-32B-Dawnwhisper`](https://huggingface.co/DoppelReflEx/QWQ-32B-Dawnwhisper) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/DoppelReflEx/QWQ-32B-Dawnwhisper) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/QWQ-32B-Dawnwhisper-Q4_K_S-GGUF --hf-file qwq-32b-dawnwhisper-q4_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/QWQ-32B-Dawnwhisper-Q4_K_S-GGUF --hf-file qwq-32b-dawnwhisper-q4_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/QWQ-32B-Dawnwhisper-Q4_K_S-GGUF --hf-file qwq-32b-dawnwhisper-q4_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/QWQ-32B-Dawnwhisper-Q4_K_S-GGUF --hf-file qwq-32b-dawnwhisper-q4_k_s.gguf -c 2048
```
|
LarryAIDraw/touhou_kudamaki_ponyXL | LarryAIDraw | 2025-05-03T16:41:08Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-05-03T16:09:16Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/371928/ponyv6-xl-tsukasa-kudamaki-or-touhou |
mradermacher/Alkahest-V9.3-LLaMa-70B-GGUF | mradermacher | 2025-05-03T16:39:58Z | 191 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:TareksTesting/Alkahest-V9.3-LLaMa-70B",
"base_model:quantized:TareksTesting/Alkahest-V9.3-LLaMa-70B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-23T03:36:57Z | ---
base_model: TareksTesting/Alkahest-V9.3-LLaMa-70B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/TareksTesting/Alkahest-V9.3-LLaMa-70B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Alkahest-V9.3-LLaMa-70B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
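For the multi-part files listed below, the parts can be joined back into a single .gguf file before use; the sketch below assumes they are a plain byte split (equivalent to `cat part1 part2 > out`), as the linked README describes, and uses the Q8_0 part names from the table.
```python
# Minimal sketch for joining split GGUF parts into one file, assuming the parts are a
# plain byte split, as described in the linked README.
import shutil
parts = [
    "Alkahest-V9.3-LLaMa-70B.Q8_0.gguf.part1of2",
    "Alkahest-V9.3-LLaMa-70B.Q8_0.gguf.part2of2",
]
with open("Alkahest-V9.3-LLaMa-70B.Q8_0.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)
```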
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Alkahest-V9.3-LLaMa-70B-GGUF/resolve/main/Alkahest-V9.3-LLaMa-70B.Q2_K.gguf) | Q2_K | 26.5 | |
| [GGUF](https://huggingface.co/mradermacher/Alkahest-V9.3-LLaMa-70B-GGUF/resolve/main/Alkahest-V9.3-LLaMa-70B.Q3_K_S.gguf) | Q3_K_S | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/Alkahest-V9.3-LLaMa-70B-GGUF/resolve/main/Alkahest-V9.3-LLaMa-70B.Q3_K_M.gguf) | Q3_K_M | 34.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Alkahest-V9.3-LLaMa-70B-GGUF/resolve/main/Alkahest-V9.3-LLaMa-70B.Q3_K_L.gguf) | Q3_K_L | 37.2 | |
| [GGUF](https://huggingface.co/mradermacher/Alkahest-V9.3-LLaMa-70B-GGUF/resolve/main/Alkahest-V9.3-LLaMa-70B.IQ4_XS.gguf) | IQ4_XS | 38.4 | |
| [GGUF](https://huggingface.co/mradermacher/Alkahest-V9.3-LLaMa-70B-GGUF/resolve/main/Alkahest-V9.3-LLaMa-70B.Q4_K_S.gguf) | Q4_K_S | 40.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Alkahest-V9.3-LLaMa-70B-GGUF/resolve/main/Alkahest-V9.3-LLaMa-70B.Q4_K_M.gguf) | Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Alkahest-V9.3-LLaMa-70B-GGUF/resolve/main/Alkahest-V9.3-LLaMa-70B.Q5_K_S.gguf) | Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/Alkahest-V9.3-LLaMa-70B-GGUF/resolve/main/Alkahest-V9.3-LLaMa-70B.Q5_K_M.gguf) | Q5_K_M | 50.1 | |
| [PART 1](https://huggingface.co/mradermacher/Alkahest-V9.3-LLaMa-70B-GGUF/resolve/main/Alkahest-V9.3-LLaMa-70B.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Alkahest-V9.3-LLaMa-70B-GGUF/resolve/main/Alkahest-V9.3-LLaMa-70B.Q6_K.gguf.part2of2) | Q6_K | 58.0 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/Alkahest-V9.3-LLaMa-70B-GGUF/resolve/main/Alkahest-V9.3-LLaMa-70B.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Alkahest-V9.3-LLaMa-70B-GGUF/resolve/main/Alkahest-V9.3-LLaMa-70B.Q8_0.gguf.part2of2) | Q8_0 | 75.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
pictgencustomer/coneyisland2-14_425 | pictgencustomer | 2025-05-03T16:38:24Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-03T16:38:21Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: coneyisland2-14_michaeluffer_6
---
# Coneyisland2 14_425
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `coneyisland2-14_michaeluffer_6` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('pictgencustomer/coneyisland2-14_425', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
mradermacher/MedicalEDI-14b-EDI-Reasoning-test-GGUF | mradermacher | 2025-05-03T16:36:55Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Shaleen123/MedicalEDI-14b-EDI-Reasoning-test",
"base_model:quantized:Shaleen123/MedicalEDI-14b-EDI-Reasoning-test",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-03T11:39:24Z | ---
base_model: Shaleen123/MedicalEDI-14b-EDI-Reasoning-test
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Shaleen123/MedicalEDI-14b-EDI-Reasoning-test
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-test-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-test-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-test.Q2_K.gguf) | Q2_K | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-test-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-test.Q3_K_S.gguf) | Q3_K_S | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-test-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-test.Q3_K_M.gguf) | Q3_K_M | 7.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-test-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-test.Q3_K_L.gguf) | Q3_K_L | 8.0 | |
| [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-test-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-test.IQ4_XS.gguf) | IQ4_XS | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-test-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-test.Q4_K_S.gguf) | Q4_K_S | 8.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-test-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-test.Q4_K_M.gguf) | Q4_K_M | 9.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-test-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-test.Q5_K_S.gguf) | Q5_K_S | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-test-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-test.Q5_K_M.gguf) | Q5_K_M | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-test-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-test.Q6_K.gguf) | Q6_K | 12.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-test-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-test.Q8_0.gguf) | Q8_0 | 15.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
zhangchen1991/search-r1_step500 | zhangchen1991 | 2025-05-03T16:33:56Z | 0 | 0 | null | [
"safetensors",
"license:apache-2.0",
"region:us"
] | null | 2025-05-03T14:47:54Z | ---
license: apache-2.0
---
|
Photchara/stock-sentiment-analysis_V1 | Photchara | 2025-05-03T16:31:04Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"base_model:ProsusAI/finbert",
"base_model:finetune:ProsusAI/finbert",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-05-03T16:03:33Z | ---
library_name: transformers
base_model:
- ProsusAI/finbert
pipeline_tag: text-classification
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
MODEL_NAME = "ProsusAI/finbert"
MAX_LENGTH = 512
BATCH_SIZE = 16
EPOCHS = 3
LEARNING_RATE = 2e-5
'eval_loss': 0.3165287375450134, 'eval_accuracy': 0.8761904761904762, 'eval_f1': 0.8807109047522269
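A minimal inference sketch is shown below; the sentiment labels returned depend on the id2label mapping saved with this checkpoint (FinBERT-style positive/negative/neutral is an assumption).
```python
# Minimal sketch, assuming FinBERT-style sentiment labels are kept by the fine-tuned head.
from transformers import pipeline
sentiment = pipeline("text-classification", model="Photchara/stock-sentiment-analysis_V1")
print(sentiment("Quarterly revenue beat expectations and the stock rallied."))
```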
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Membersuger/Euro_20 | Membersuger | 2025-05-03T16:29:58Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-03T16:20:00Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mlfoundations-dev/no_pipeline_code_1k | mlfoundations-dev | 2025-05-03T16:28:24Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-03T15:41:49Z | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: no_pipeline_code_1k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# no_pipeline_code_1k
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/no_pipeline_code_1k dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 16
- gradient_accumulation_steps: 6
- total_train_batch_size: 96
- total_eval_batch_size: 128
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 7.0
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.5.1
- Datasets 3.1.0
- Tokenizers 0.20.3
|
Triangle104/QWQ-32B-Dawnwhisper-Q3_K_L-GGUF | Triangle104 | 2025-05-03T16:26:04Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:DoppelReflEx/QWQ-32B-Dawnwhisper",
"base_model:quantized:DoppelReflEx/QWQ-32B-Dawnwhisper",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-03T16:24:48Z | ---
base_model: DoppelReflEx/QWQ-32B-Dawnwhisper
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# Triangle104/QWQ-32B-Dawnwhisper-Q3_K_L-GGUF
This model was converted to GGUF format from [`DoppelReflEx/QWQ-32B-Dawnwhisper`](https://huggingface.co/DoppelReflEx/QWQ-32B-Dawnwhisper) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/DoppelReflEx/QWQ-32B-Dawnwhisper) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/QWQ-32B-Dawnwhisper-Q3_K_L-GGUF --hf-file qwq-32b-dawnwhisper-q3_k_l.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/QWQ-32B-Dawnwhisper-Q3_K_L-GGUF --hf-file qwq-32b-dawnwhisper-q3_k_l.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/QWQ-32B-Dawnwhisper-Q3_K_L-GGUF --hf-file qwq-32b-dawnwhisper-q3_k_l.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/QWQ-32B-Dawnwhisper-Q3_K_L-GGUF --hf-file qwq-32b-dawnwhisper-q3_k_l.gguf -c 2048
```
|
atamilagba/hug | atamilagba | 2025-05-03T16:25:34Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-03T16:25:32Z | ---
license: apache-2.0
---
|
abbas7733/2024 | abbas7733 | 2025-05-03T16:24:04Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-03T16:24:04Z | ---
license: apache-2.0
---
|
jahyungu/Llama-3.1-8B-Instruct_MetaMathQA-40K_9 | jahyungu | 2025-05-03T16:19:32Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"license:llama3.1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-03T14:11:35Z | ---
library_name: transformers
license: llama3.1
base_model: meta-llama/Llama-3.1-8B-Instruct
tags:
- generated_from_trainer
model-index:
- name: Llama-3.1-8B-Instruct_MetaMathQA-40K_9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-3.1-8B-Instruct_MetaMathQA-40K_9
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.50.0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.0
|
ai-and-society/llama-3.1-8B-Instruct-SQINT8 | ai-and-society | 2025-05-03T16:19:19Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"compressed-tensors",
"region:us"
] | text-generation | 2025-05-03T16:16:24Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Hachipo/OpenCoder-8B-Base-PIFT-jaen_1000_2 | Hachipo | 2025-05-03T16:19:17Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-03T16:15:11Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
dgambettaphd/M_llm2_gen7_WXS_doc1000_synt64_lr1e-04_acm_SYNLAST | dgambettaphd | 2025-05-03T16:15:02Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-03T16:14:49Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ASethi04/meta-llama-Llama-3.1-8B-pubmedqa-first-lora-4-0.0001 | ASethi04 | 2025-05-03T16:13:01Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:finetune:meta-llama/Llama-3.1-8B",
"endpoints_compatible",
"region:us"
] | null | 2025-05-03T14:29:38Z | ---
base_model: meta-llama/Llama-3.1-8B
library_name: transformers
model_name: meta-llama-Llama-3.1-8B-pubmedqa-first-lora-4-0.0001
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for meta-llama-Llama-3.1-8B-pubmedqa-first-lora-4-0.0001
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ASethi04/meta-llama-Llama-3.1-8B-pubmedqa-first-lora-4-0.0001", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/torchql-org/huggingface/runs/tzwksac9)
This model was trained with SFT.
### Framework versions
- TRL: 0.16.1
- Transformers: 4.51.2
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
HuangXinBa/marian-finetuned-kde4-en-to-zh_CN | HuangXinBa | 2025-05-03T16:09:30Z | 0 | 0 | null | [
"tensorboard",
"safetensors",
"marian",
"translation",
"generated_from_trainer",
"dataset:kde4",
"base_model:zhijian12345/marian-finetuned-kde4-en-to-zh_CN",
"base_model:finetune:zhijian12345/marian-finetuned-kde4-en-to-zh_CN",
"license:apache-2.0",
"region:us"
] | translation | 2025-05-02T18:20:35Z | ---
license: apache-2.0
base_model: zhijian12345/marian-finetuned-kde4-en-to-zh_CN
tags:
- translation
- generated_from_trainer
datasets:
- kde4
model-index:
- name: marian-finetuned-kde4-en-to-zh_CN
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-zh_CN
This model is a fine-tuned version of [zhijian12345/marian-finetuned-kde4-en-to-zh_CN](https://huggingface.co/zhijian12345/marian-finetuned-kde4-en-to-zh_CN) on the kde4 dataset.
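A minimal usage sketch with the 🤗 `translation` pipeline (the example sentence is only illustrative):
```python
from transformers import pipeline
# English -> Simplified Chinese translation with this fine-tuned checkpoint
translator = pipeline("translation", model="HuangXinBa/marian-finetuned-kde4-en-to-zh_CN")
print(translator("Default to expanded threads")[0]["translation_text"])
```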
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a rough `Seq2SeqTrainingArguments` sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
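For reference, here is a hedged reconstruction of those settings in code; it is a sketch, not the exact training script that was used:
```python
from transformers import Seq2SeqTrainingArguments
# Rough mapping of the hyperparameters listed above
args = Seq2SeqTrainingArguments(
    output_dir="marian-finetuned-kde4-en-to-zh_CN",
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
    fp16=True,  # "Native AMP" mixed precision
)
```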
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 3.5.0
- Tokenizers 0.15.2
|
endlesseve/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-fast_whistling_badger | endlesseve | 2025-05-03T16:04:17Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am fast whistling badger",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-30T08:19:59Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-fast_whistling_badger
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am fast whistling badger
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-fast_whistling_badger
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="endlesseve/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-fast_whistling_badger", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.7.0
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Poseidon/Foundation-Sec-8B-Q8_0-GGUF | Poseidon | 2025-05-03T16:03:52Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"security",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:fdtn-ai/Foundation-Sec-8B",
"base_model:quantized:fdtn-ai/Foundation-Sec-8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-03T16:01:29Z | ---
base_model: fdtn-ai/Foundation-Sec-8B
language:
- en
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
tags:
- security
- llama-cpp
- gguf-my-repo
---
# Poseidon/Foundation-Sec-8B-Q8_0-GGUF
This model was converted to GGUF format from [`fdtn-ai/Foundation-Sec-8B`](https://huggingface.co/fdtn-ai/Foundation-Sec-8B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/fdtn-ai/Foundation-Sec-8B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Poseidon/Foundation-Sec-8B-Q8_0-GGUF --hf-file foundation-sec-8b-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Poseidon/Foundation-Sec-8B-Q8_0-GGUF --hf-file foundation-sec-8b-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Poseidon/Foundation-Sec-8B-Q8_0-GGUF --hf-file foundation-sec-8b-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Poseidon/Foundation-Sec-8B-Q8_0-GGUF --hf-file foundation-sec-8b-q8_0.gguf -c 2048
```
|
Anchor-Chitra-Tripathi-Viral-Video/wATCH.Anchor-Chitra-Tripathi-Viral-Anchor-Chitra-Tripathi.original | Anchor-Chitra-Tripathi-Viral-Video | 2025-05-03T16:03:07Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-03T15:57:29Z | [🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐋𝐢𝐧𝐤 )](https://videohere.top/?Anchor-Chitra-Tripathi)
[►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► 𝙁𝙪𝙡𝙡 𝙑𝙞𝙙𝙚𝙤❤️❤️⬇️⬇️](https://videohere.top/?Anchor-Chitra-Tripathi)
[<img alt="fsd" src="http://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?Anchor-Chitra-Tripathi) |
X-iZhang/libra-v1.0-7b | X-iZhang | 2025-05-03T16:00:29Z | 117 | 2 | transformers | [
"transformers",
"pytorch",
"safetensors",
"libra",
"text-generation",
"RRG",
"Radiology Report Generation",
"Chest X-ray",
"Multimodal Large Language Models",
"image-text-to-text",
"arxiv:2411.19378",
"base_model:epfl-llm/meditron-7b",
"base_model:merge:epfl-llm/meditron-7b",
"base_model:microsoft/rad-dino",
"base_model:merge:microsoft/rad-dino",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2024-12-20T16:02:33Z | ---
license: apache-2.0
pipeline_tag: image-text-to-text
base_model:
- epfl-llm/meditron-7b
- microsoft/rad-dino
base_model_relation: merge
library_name: transformers
tags:
- RRG
- Radiology Report Generation
- Chest X-ray
- Multimodal Large Language Models
---
<br>
# **Libra Model Card**
**Version**: Libra-v1.0
## Overview
**Libra** is a multimodal Large Language Model (LLM) specialized in **radiology report generation**, particularly **chest X-ray** interpretations. It can produce detailed _Findings_ sections with **temporal comparisons** (e.g., comparing a current chest X-ray with prior ones). Libra integrates the following key components:
- **RAD-DINO**: A vision encoder pre-trained on medical imaging datasets for robust feature extraction from chest X-rays.
- **Meditron-7B**: A 7B-parameter large language model (based on Llama-2) specialized in medical text generation.
- **Temporal Alignment Connector (TAC)**: A custom adapter that fuses features across multiple time points to enable temporal comparisons.
This model card provides an overview of Libra’s architecture, training methodology, limitations, and recommended usage guidelines.
---
## Paper and Resources
For more detailed information regarding Libra’s methodology, theoretical foundation, and performance benchmarks, please refer to the following resources:
- **Project Website**: [Libra v1.0](https://x-izhang.github.io/Libra_v1.0/)
- **Paper**: [arXiv:2411.19378](https://arxiv.org/abs/2411.19378)
- **Code Repository**: [X-iZhang/Libra (GitHub)](https://github.com/X-iZhang/Libra)
Or check out our Spaces demo! [](https://huggingface.co/spaces/X-iZhang/Libra)
---
## Training Strategy
Libra is trained in a **two-stage process**:
1. **Temporal Feature Alignment**
- Trains TAC to effectively fuse and align features from different time points (current and previous chest X-rays).
- Focuses on capturing notable changes (e.g., appearance or progression of opacities, devices, and lines).
2. **Fine-Tuning for Radiology Report Generation**
- The language model part is fine-tuned on a large dataset of paired chest X-ray images and radiology reports.
- Emphasizes the generation of the _Findings_ section, especially incorporating temporal descriptors.
---
## Intended Use
Libra is primarily designed to **assist** clinical practitioners, researchers, and medical students in generating chest X-ray reports. Key applications include:
- **Clinical Decision Support**: Providing draft findings that can be refined by a radiologist.
- **Educational Tool**: Demonstrating example interpretations and temporal changes for training radiology residents.
- **Research**: Facilitating studies on automated report generation and temporal feature learning in medical imaging.
> **Important**: Outputs should be reviewed by qualified radiologists or medical professionals before final clinical decisions are made.
---
## Limitations and Recommendations
1. **Data Bias**: The model’s performance may be less reliable for underrepresented demographics or rare pathologies.
2. **Clinical Oversight**: Always involve a medical professional to verify the results—Libra is not a substitute for professional judgment.
3. **Temporal Inaccuracies**: Despite TAC’s focus on temporal alignment, subtle or uncommon changes may go unrecognized.
4. **Generalization**: Libra’s performance on chest X-ray types or conditions not seen during training may be limited.
---
## Ethical Considerations
- **Patient Privacy**: Ensure the data is fully de-identified and compliant with HIPAA/GDPR (or relevant privacy regulations).
- **Responsible Use**: Deploy Libra’s outputs carefully; they are not guaranteed to be error-free.
- **Accountability**: Users and organizations must assume responsibility for verifying clinical accuracy and safety.
---
## How to Cite ✒️
If you use Libra in academic or research contexts, please cite:
```bibtex
@misc{zhang2025libraleveragingtemporalimages,
title={Libra: Leveraging Temporal Images for Biomedical Radiology Analysis},
author={Xi Zhang and Zaiqiao Meng and Jake Lever and Edmond S. L. Ho},
year={2025},
eprint={2411.19378},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2411.19378},
}
```
## Disclaimer:
This tool is for research and educational purposes only. It is not FDA-approved or CE-marked for clinical use. Users should consult qualified healthcare professionals for any clinical decisions. |
Romain-XV/8f3edba3-d660-4f05-bc84-9befdb0b2deb | Romain-XV | 2025-05-03T15:59:04Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"mistral",
"text-generation",
"generated_from_trainer",
"axolotl",
"dpo",
"trl",
"unsloth",
"conversational",
"arxiv:2305.18290",
"base_model:unsloth/Phi-3-mini-4k-instruct",
"base_model:finetune:unsloth/Phi-3-mini-4k-instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-03T15:09:42Z | ---
base_model: unsloth/Phi-3-mini-4k-instruct
library_name: transformers
model_name: 8f3edba3-d660-4f05-bc84-9befdb0b2deb
tags:
- generated_from_trainer
- axolotl
- dpo
- trl
- unsloth
licence: license
---
# Model Card for 8f3edba3-d660-4f05-bc84-9befdb0b2deb
This model is a fine-tuned version of [unsloth/Phi-3-mini-4k-instruct](https://huggingface.co/unsloth/Phi-3-mini-4k-instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Romain-XV/8f3edba3-d660-4f05-bc84-9befdb0b2deb", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/romain_fnc-xventures/Gradients-On-Demand/runs/8bqxlpyz)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.12.0.dev0
- Transformers: 4.46.0
- Pytorch: 2.5.0+cu124
- Datasets: 3.0.1
- Tokenizers: 0.20.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
bihungba1101/json_segmenting_sft_warmup_qwen | bihungba1101 | 2025-05-03T15:57:22Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"base_model:unsloth/Qwen3-4B",
"base_model:finetune:unsloth/Qwen3-4B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-03T15:57:03Z | ---
base_model: unsloth/Qwen3-4B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** bihungba1101
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-4B
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
TOMFORD79/Fly44 | TOMFORD79 | 2025-05-03T15:56:35Z | 0 | 0 | null | [
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | 2025-05-03T15:15:34Z | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
TOMFORD79/Fly43 | TOMFORD79 | 2025-05-03T15:56:22Z | 0 | 0 | null | [
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | 2025-05-03T15:15:28Z | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
TOMFORD79/Fly42 | TOMFORD79 | 2025-05-03T15:56:21Z | 0 | 0 | null | [
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | 2025-05-03T15:15:22Z | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
piotr-ai/polanka_4b_v0.1_qwen3_250502_gguf | piotr-ai | 2025-05-03T15:52:25Z | 0 | 0 | null | [
"gguf",
"language",
"text-generation",
"pl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-05-02T07:35:13Z | ---
license: apache-2.0
language:
- pl
- en
pipeline_tag: text-generation
tags:
- language
---
Sampling settings (used for the examples below): temp 0.6, top-k 40.
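For a quick local test with llama.cpp, here is a hedged sketch; the GGUF file name below is a placeholder, so substitute the actual file from this repository's file list:
```bash
# File name is hypothetical: pick the quant you downloaded from this repo
llama-cli --hf-repo piotr-ai/polanka_4b_v0.1_qwen3_250502_gguf \
  --hf-file polanka_4b_v0.1_qwen3_250502-Q4_K_M.gguf \
  --temp 0.6 --top-k 40 -cnv \
  -p "Poniżej znajduje się dialog, w którym użytkownik rozmawia z asystentką o imieniu Polanka. Polanka jest pomocna i uprzejma."
```
With `-cnv`, llama-cli runs in chat mode and treats the `-p` text as the system prompt, applying the chat template stored in the GGUF metadata.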
**example 1:**
```
<|im_start|>system
Poniżej znajduje się dialog, w którym użytkownik rozmawia z asystentką o imieniu Polanka.
Polanka jest pomocna i uprzejma.<|im_end|>
> co to jest psychologia?
Psychologia to dziedzina nauki, która zajmuje się badaniem ludzkiego umysłu, zachowań, emocji, motywacji i procesów poznawczych. Psychologia bada, jak ludzie myślą, uczą się, doświadczają i reagują na swoje otoczenie. Jest to dyscyplina, która łączy w sobie elementy z różnych dziedzin, takich jak:
1. **Biologia**: Badanie biologicznych podstaw ludzkiego zachowania i poznawania.
2. **Neurobiologia**: Badanie struktury i funkcji mózgu oraz ich wpływu na zachowanie.
3. **Psychologia behawioralna**: Badanie zachowań poprzez obserwację i interwencję.
4. **Psychologia poznawcza**: Badanie procesów myślowych, takich jak pamięć, uczenie się, rozwiązywanie problemów.
5. **Psychologia społeczna**: Badanie wpływu społeczeństwa na zachowanie i postrzeganie siebie.
6. **Psychologia rozwoju**: Badanie rozwoju osobowości, poznawczego i emocjonalnego w czasie.
7. **Psychologia kliniczna**: Badanie i leczenie zaburzeń psychicznych oraz problemów emocjonalnych.
8. **Psychologia kulturowa**: Badanie wpływu kultury na ludzkie zachowanie i postrzeganie siebie.
9. **Psychologia etyczna**: Badanie etycznych aspektów psychologii i jej zastosowań.
Psychologia ma kluczowe znaczenie w rozumieniu ludzkich zachowań, motywacji i emocji, a także w rozwoju terapii i leczenia zaburzeń psychicznych.
```
**example 2:**
wyjaśnij krok po kroku ile to jest 12 / 3 + 2
```
Oczywiście, wyjaśnię ci to krok po kroku:
1. Rozpocznijmy od pierwszego działania: 12 / 3.
- 12 podzielone przez 3 to 4, ponieważ 3 razy 4 to 12.
2. Teraz dodajemy do wyniku 4 liczbę 2.
- 4 + 2 = 6.
Więc 12 / 3 + 2 = 6.
```
**example 3:**
napisz funkcję w pythonie, która mnoży liczbę razy 3
````
Oto funkcja w Pythonie, która mnoży liczbę przez 3:
```python
def mnozenie_liczby_3(liczba):
"""
Funkcja mnoży liczbę przez 3.
Parameters:
liczba (int): Liczba, którą chcemy pomnożyć.
Returns:
int: Wynik mnożenia liczby przez 3.
"""
wynik = liczba * 3
return wynik
# Przykłady użycia:
liczba = 5
wynik = mnozenie_liczby_3(liczba)
print(f"Wynik mnożenia 5 przez 3 to: {wynik}")
liczba = 0
wynik = mnozenie_liczby_3(liczba)
print(f"Wynik mnożenia 0 przez 3 to: {wynik}")
```
W tej funkcji `liczba` to parametr, który otrzymujemy podczas wywołania funkcji. Funkcja zwraca wynik mnożenia liczby przez 3. W przykładach użycia możemy przypisać do zmiennej liczbę, którą chcemy pomnożyć, a następnie wywołać funkcję z tą liczbą jako argumentem i wyświetlić wynik.
````
**example 4:**
przetłumacz na język angielski: "Liczba mórz na Oceanie Spokojnym w stosunku do jego powierzchni nie jest zbyt wielka, ponieważ większa część jego obszaru to po prostu ocean, bez miejsc, w których można wyznaczyć morza."
```
"Number of seas in the Pacific Ocean relative to its surface isn't too great because the majority of its area is just ocean, with no places where you can identify a sea."
``` |
ping98k/qwen3-8b-recall-writer-1e | ping98k | 2025-05-03T15:47:27Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"base_model:unsloth/Qwen3-8B-unsloth-bnb-4bit",
"base_model:finetune:unsloth/Qwen3-8B-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-03T15:46:51Z | ---
base_model: unsloth/Qwen3-8B-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** ping98k
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-8B-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
franzexplorer77/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-alert_armored_trout | franzexplorer77 | 2025-05-03T15:43:02Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am alert armored trout",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-03T13:54:42Z | ---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-alert_armored_trout
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am alert armored trout
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-alert_armored_trout
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="franzexplorer77/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-alert_armored_trout", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Docty/dreambooth-dog-lora | Docty | 2025-05-03T15:42:38Z | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"text-to-image",
"lora",
"diffusers-training",
"stable-diffusion",
"stable-diffusion-diffusers",
"base_model:stable-diffusion-v1-5/stable-diffusion-v1-5",
"base_model:adapter:stable-diffusion-v1-5/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2025-05-03T15:17:56Z | ---
base_model: stable-diffusion-v1-5/stable-diffusion-v1-5
library_name: diffusers
license: creativeml-openrail-m
inference: true
instance_prompt: A photo of sks dog
tags:
- text-to-image
- diffusers
- lora
- diffusers-training
- stable-diffusion
- stable-diffusion-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# LoRA DreamBooth - Docty/dreambooth-dog-lora
These are LoRA adaptation weights for stable-diffusion-v1-5/stable-diffusion-v1-5. The weights were trained on the instance prompt `A photo of sks dog` using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.




LoRA for the text encoder was enabled: False.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
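In the meantime, a minimal inference sketch with 🤗 Diffusers is given below; it assumes the adapter is stored in the default `pytorch_lora_weights` format produced by the DreamBooth LoRA training script and has not been tested against this checkpoint:
```python
import torch
from diffusers import StableDiffusionPipeline
# Load the base model named in this card, then attach the LoRA adapter from this repo
pipe = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("Docty/dreambooth-dog-lora")
# The instance prompt used during training was "A photo of sks dog"
image = pipe("A photo of sks dog in a bucket", num_inference_steps=30).images[0]
image.save("sks_dog.png")
```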
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
oferk/ppo-LunarLander-v2 | oferk | 2025-05-03T15:41:17Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2025-05-03T15:38:39Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 261.46 +/- 22.46
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repo's file list for the exact name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
# Download the checkpoint from the Hub and load it (filename assumed to follow the course convention)
checkpoint = load_from_hub(repo_id="oferk/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
StrangeGhost/Qwen2.5-72B-Instruct-bnb-4bit-Gensyn-Swarm-prickly_silent_ostrich | StrangeGhost | 2025-05-03T15:39:34Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am prickly silent ostrich",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-72B-Instruct-bnb-4bit",
"base_model:finetune:Gensyn/Qwen2.5-72B-Instruct-bnb-4bit",
"endpoints_compatible",
"region:us"
] | null | 2025-05-03T15:39:21Z | ---
base_model: Gensyn/Qwen2.5-72B-Instruct-bnb-4bit
library_name: transformers
model_name: Qwen2.5-72B-Instruct-bnb-4bit-Gensyn-Swarm-prickly_silent_ostrich
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am prickly silent ostrich
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-72B-Instruct-bnb-4bit-Gensyn-Swarm-prickly_silent_ostrich
This model is a fine-tuned version of [Gensyn/Qwen2.5-72B-Instruct-bnb-4bit](https://huggingface.co/Gensyn/Qwen2.5-72B-Instruct-bnb-4bit).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="StrangeGhost/Qwen2.5-72B-Instruct-bnb-4bit-Gensyn-Swarm-prickly_silent_ostrich", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
lmganon123/THUDM_GLM-Z1-32B-0414-exl3-4.0bpw | lmganon123 | 2025-05-03T15:35:15Z | 0 | 0 | null | [
"safetensors",
"glm4",
"base_model:THUDM/GLM-Z1-32B-0414",
"base_model:quantized:THUDM/GLM-Z1-32B-0414",
"4-bit",
"exl3",
"region:us"
] | null | 2025-05-03T14:19:10Z | ---
base_model:
- THUDM/GLM-Z1-32B-0414
---
I wish more people would make exl3 quants. I probably will be making some for 24GB VRAM.
## Prompt format
```
[gMASK]<sop><|system|>
{system_prompt}<|user|>
{prompt}<|assistant|>
<think>
``` |
mradermacher/MedicalEDI-14b-EDI-Reasoning-Final-3-GGUF | mradermacher | 2025-05-03T15:34:24Z | 0 | 1 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Shaleen123/MedicalEDI-14b-EDI-Reasoning-Final-3",
"base_model:quantized:Shaleen123/MedicalEDI-14b-EDI-Reasoning-Final-3",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-03T10:47:58Z | ---
base_model: Shaleen123/MedicalEDI-14b-EDI-Reasoning-Final-3
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Shaleen123/MedicalEDI-14b-EDI-Reasoning-Final-3
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-Final-3-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
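As a concrete example, the Q4_K_M file from the table below can be run directly with llama-cli (assuming a recent llama.cpp build with Hub download support; the prompt is illustrative):
```bash
llama-cli --hf-repo mradermacher/MedicalEDI-14b-EDI-Reasoning-Final-3-GGUF \
  --hf-file MedicalEDI-14b-EDI-Reasoning-Final-3.Q4_K_M.gguf \
  -p "List key red-flag symptoms that warrant an urgent sepsis workup." -n 256
```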
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-Final-3-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-Final-3.Q2_K.gguf) | Q2_K | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-Final-3-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-Final-3.Q3_K_S.gguf) | Q3_K_S | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-Final-3-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-Final-3.Q3_K_M.gguf) | Q3_K_M | 7.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-Final-3-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-Final-3.Q3_K_L.gguf) | Q3_K_L | 8.0 | |
| [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-Final-3-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-Final-3.IQ4_XS.gguf) | IQ4_XS | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-Final-3-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-Final-3.Q4_K_S.gguf) | Q4_K_S | 8.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-Final-3-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-Final-3.Q4_K_M.gguf) | Q4_K_M | 9.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-Final-3-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-Final-3.Q5_K_S.gguf) | Q5_K_S | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-Final-3-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-Final-3.Q5_K_M.gguf) | Q5_K_M | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-Final-3-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-Final-3.Q6_K.gguf) | Q6_K | 12.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-Final-3-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-Final-3.Q8_0.gguf) | Q8_0 | 15.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Triangle104/Llama-3.1-Nemotron-Nano-8B-v1-Q8_0-GGUF | Triangle104 | 2025-05-03T15:34:08Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"nvidia",
"llama-3",
"pytorch",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:nvidia/Llama-3.1-Nemotron-Nano-8B-v1",
"base_model:quantized:nvidia/Llama-3.1-Nemotron-Nano-8B-v1",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-05-03T15:32:25Z | ---
base_model: nvidia/Llama-3.1-Nemotron-Nano-8B-v1
language:
- en
library_name: transformers
license: other
license_name: nvidia-open-model-license
license_link: https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/
pipeline_tag: text-generation
tags:
- nvidia
- llama-3
- pytorch
- llama-cpp
- gguf-my-repo
---
# Triangle104/Llama-3.1-Nemotron-Nano-8B-v1-Q8_0-GGUF
This model was converted to GGUF format from [`nvidia/Llama-3.1-Nemotron-Nano-8B-v1`](https://huggingface.co/nvidia/Llama-3.1-Nemotron-Nano-8B-v1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/nvidia/Llama-3.1-Nemotron-Nano-8B-v1) for more details on the model.
---
Llama-3.1-Nemotron-Nano-8B-v1 is a large language model (LLM) which is a derivative of Meta Llama-3.1-8B-Instruct (AKA the reference model). It is a reasoning model that is post-trained for reasoning, human chat preferences, and tasks such as RAG and tool calling.
Llama-3.1-Nemotron-Nano-8B-v1 offers a great tradeoff between model accuracy and efficiency. It is created from Llama 3.1 8B Instruct and offers improvements in model accuracy. The model fits on a single RTX GPU, can be used locally, and supports a context length of 128K.
This model underwent a multi-phase post-training process to enhance both its reasoning and non-reasoning capabilities. This includes a supervised fine-tuning stage for Math, Code, Reasoning, and Tool Calling, as well as multiple reinforcement learning (RL) stages using REINFORCE (RLOO) and Online Reward-aware Preference Optimization (RPO) algorithms for both chat and instruction-following. The final model checkpoint is obtained after merging the final SFT and Online RPO checkpoints.
Improved using Qwen.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Llama-3.1-Nemotron-Nano-8B-v1-Q8_0-GGUF --hf-file llama-3.1-nemotron-nano-8b-v1-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Llama-3.1-Nemotron-Nano-8B-v1-Q8_0-GGUF --hf-file llama-3.1-nemotron-nano-8b-v1-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Llama-3.1-Nemotron-Nano-8B-v1-Q8_0-GGUF --hf-file llama-3.1-nemotron-nano-8b-v1-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Llama-3.1-Nemotron-Nano-8B-v1-Q8_0-GGUF --hf-file llama-3.1-nemotron-nano-8b-v1-q8_0.gguf -c 2048
```
|
mafzaal/finetuned_arctic_ft | mafzaal | 2025-05-03T15:33:46Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:156",
"loss:MatryoshkaLoss",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:2205.13147",
"arxiv:1705.00652",
"base_model:Snowflake/snowflake-arctic-embed-l",
"base_model:finetune:Snowflake/snowflake-arctic-embed-l",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2025-05-03T15:33:03Z | ---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:156
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
base_model: Snowflake/snowflake-arctic-embed-l
widget:
- source_sentence: Which multi-modal models were released by significant vendors in
2024, and in which months did they appear?
sentences:
- 'An interesting point of comparison here could be the way railways rolled out
around the world in the 1800s. Constructing these required enormous investments
and had a massive environmental impact, and many of the lines that were built
turned out to be unnecessary—sometimes multiple lines from different companies
serving the exact same routes!
The resulting bubbles contributed to several financial crashes, see Wikipedia
for Panic of 1873, Panic of 1893, Panic of 1901 and the UK’s Railway Mania. They
left us with a lot of useful infrastructure and a great deal of bankruptcies and
environmental damage.
The year of slop'
- 'In 2024, almost every significant model vendor released multi-modal models. We
saw the Claude 3 series from Anthropic in March, Gemini 1.5 Pro in April (images,
audio and video), then September brought Qwen2-VL and Mistral’s Pixtral 12B and
Meta’s Llama 3.2 11B and 90B vision models. We got audio input and output from
OpenAI in October, then November saw SmolVLM from Hugging Face and December saw
image and video models from Amazon Nova.
In October I upgraded my LLM CLI tool to support multi-modal models via attachments.
It now has plugins for a whole collection of different vision models.'
- 'OpenAI made GPT-4o free for all users in May, and Claude 3.5 Sonnet was freely
available from its launch in June. This was a momentus change, because for the
previous year free users had mostly been restricted to GPT-3.5 level models, meaning
new users got a very inaccurate mental model of what a capable LLM could actually
do.
That era appears to have ended, likely permanently, with OpenAI’s launch of ChatGPT
Pro. This $200/month subscription service is the only way to access their most
capable model, o1 Pro.
Since the trick behind the o1 series (and the future models it will undoubtedly
inspire) is to expend more compute time to get better results, I don’t think those
days of free access to the best available models are likely to return.'
- source_sentence: How is a prompt without evals, models, and UX compared in the given
context?
sentences:
- 'The environmental impact got much, much worse
The much bigger problem here is the enormous competitive buildout of the infrastructure
that is imagined to be necessary for these models in the future.
Companies like Google, Meta, Microsoft and Amazon are all spending billions of
dollars rolling out new datacenters, with a very material impact on the electricity
grid and the environment. There’s even talk of spinning up new nuclear power stations,
but those can take decades.
Is this infrastructure necessary? DeepSeek v3’s $6m training cost and the continued
crash in LLM prices might hint that it’s not. But would you want to be the big
tech executive that argued NOT to build out this infrastructure only to be proven
wrong in a few years’ time?'
- 'When @v0 first came out we were paranoid about protecting the prompt with all
kinds of pre and post processing complexity.
We completely pivoted to let it rip. A prompt without the evals, models, and especially
UX is like getting a broken ASML machine without a manual'
- 'The boring yet crucial secret behind good system prompts is test-driven development.
You don’t write down a system prompt and find ways to test it. You write down
tests and find a system prompt that passes them.
It’s become abundantly clear over the course of 2024 that writing good automated
evals for LLM-powered systems is the skill that’s most needed to build useful
applications on top of these models. If you have a strong eval suite you can adopt
new models faster, iterate better and build more reliable and useful product features
than your competition.
Vercel’s Malte Ubl:'
- source_sentence: How did the construction of railways in the 1800s impact the environment?
sentences:
- 'DeepSeek v3 is a huge 685B parameter model—one of the largest openly licensed
models currently available, significantly bigger than the largest of Meta’s Llama
series, Llama 3.1 405B.
Benchmarks put it up there with Claude 3.5 Sonnet. Vibe benchmarks (aka the Chatbot
Arena) currently rank it 7th, just behind the Gemini 2.0 and OpenAI 4o/o1 models.
This is by far the highest ranking openly licensed model.
The really impressive thing about DeepSeek v3 is the training cost. The model
was trained on 2,788,000 H800 GPU hours at an estimated cost of $5,576,000. Llama
3.1 405B trained 30,840,000 GPU hours—11x that used by DeepSeek v3, for a model
that benchmarks slightly worse.'
- 'An interesting point of comparison here could be the way railways rolled out
around the world in the 1800s. Constructing these required enormous investments
and had a massive environmental impact, and many of the lines that were built
turned out to be unnecessary—sometimes multiple lines from different companies
serving the exact same routes!
The resulting bubbles contributed to several financial crashes, see Wikipedia
for Panic of 1873, Panic of 1893, Panic of 1901 and the UK’s Railway Mania. They
left us with a lot of useful infrastructure and a great deal of bankruptcies and
environmental damage.
The year of slop'
- 'So far, I think they’re a net positive. I’ve used them on a personal level to
improve my productivity (and entertain myself) in all sorts of different ways.
I think people who learn how to use them effectively can gain a significant boost
to their quality of life.
A lot of people are yet to be sold on their value! Some think their negatives
outweigh their positives, some think they are all hot air, and some even think
they represent an existential threat to humanity.
They’re actually quite easy to build
The most surprising thing we’ve learned about LLMs this year is that they’re actually
quite easy to build.'
- source_sentence: How many lines of Python code are generally needed to train a basic
version of a powerful system?
sentences:
- 'We already knew LLMs were spookily good at writing code. If you prompt them right,
it turns out they can build you a full interactive application using HTML, CSS
and JavaScript (and tools like React if you wire up some extra supporting build
mechanisms)—often in a single prompt.
Anthropic kicked this idea into high gear when they released Claude Artifacts,
a groundbreaking new feature that was initially slightly lost in the noise due
to being described half way through their announcement of the incredible Claude
3.5 Sonnet.
With Artifacts, Claude can write you an on-demand interactive application and
then let you use it directly inside the Claude interface.
Here’s my Extract URLs app, entirely generated by Claude:'
- 'I’m still trying to figure out the best patterns for doing this for my own work.
Everyone knows that evals are important, but there remains a lack of great guidance
for how to best implement them—I’m tracking this under my evals tag. My SVG pelican
riding a bicycle benchmark is a pale imitation of what a real eval suite should
look like.
Apple Intelligence is bad, Apple’s MLX library is excellent
As a Mac user I’ve been feeling a lot better about my choice of platform this
year.
Last year it felt like my lack of a Linux/Windows machine with an NVIDIA GPU
was a huge disadvantage in terms of trying out new models.'
- 'Intuitively, one would expect that systems this powerful would take millions
of lines of complex code. Instead, it turns out a few hundred lines of Python
is genuinely enough to train a basic version!
What matters most is the training data. You need a lot of data to make these
things work, and the quantity and quality of the training data appears to be the
most important factor in how good the resulting model is.
If you can gather the right data, and afford to pay for the GPUs to train it,
you can build an LLM.'
- source_sentence: According to the context, what is one of the best applications
of large language models (LLMs)?
sentences:
- 'A lot of people are excited about AI agents—an infuriatingly vague term that
seems to be converging on “AI systems that can go away and act on your behalf”.
We’ve been talking about them all year, but I’ve seen few if any examples of them
running in production, despite lots of exciting prototypes.
I think this is because of gullibility.
Can we solve this? Honestly, I’m beginning to suspect that you can’t fully solve
gullibility without achieving AGI. So it may be quite a while before those agent
dreams can really start to come true!
Code may be the best application
Over the course of the year, it’s become increasingly clear that writing code
is one of the things LLMs are most capable of.'
- 'Law is not ethics. Is it OK to train models on people’s content without their
permission, when those models will then be used in ways that compete with those
people?
As the quality of results produced by AI models has increased over the year, these
questions have become even more pressing.
The impact on human society in terms of these models is already huge, if difficult
to objectively measure.
People have certainly lost work to them—anecdotally, I’ve seen this for copywriters,
artists and translators.
There are a great deal of untold stories here. I’m hoping 2024 sees significant
amounts of dedicated journalism on this topic.
My blog in 2023
Here’s a tag cloud for content I posted to my blog in 2023 (generated using Django
SQL Dashboard):'
- 'The two main categories I see are people who think AI agents are obviously things
that go and act on your behalf—the travel agent model—and people who think in
terms of LLMs that have been given access to tools which they can run in a loop
as part of solving a problem. The term “autonomy” is often thrown into the mix
too, again without including a clear definition.
(I also collected 211 definitions on Twitter a few months ago—here they are in
Datasette Lite—and had gemini-exp-1206 attempt to summarize them.)
Whatever the term may mean, agents still have that feeling of perpetually “coming
soon”.'
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: SentenceTransformer based on Snowflake/snowflake-arctic-embed-l
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: Unknown
type: unknown
metrics:
- type: cosine_accuracy@1
value: 0.9166666666666666
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 1.0
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 1.0
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 1.0
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.9166666666666666
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.3333333333333333
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.20000000000000004
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.10000000000000002
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.9166666666666666
name: Cosine Recall@1
- type: cosine_recall@3
value: 1.0
name: Cosine Recall@3
- type: cosine_recall@5
value: 1.0
name: Cosine Recall@5
- type: cosine_recall@10
value: 1.0
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.9692441461309548
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.9583333333333334
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.9583333333333334
name: Cosine Map@100
---
# SentenceTransformer based on Snowflake/snowflake-arctic-embed-l
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Snowflake/snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [Snowflake/snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l) <!-- at revision d8fb21ca8d905d2832ee8b96c894d3298964346b -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("mafzaal/finetuned_arctic_ft")
# Run inference
sentences = [
'According to the context, what is one of the best applications of large language models (LLMs)?',
'A lot of people are excited about AI agents—an infuriatingly vague term that seems to be converging on “AI systems that can go away and act on your behalf”. We’ve been talking about them all year, but I’ve seen few if any examples of them running in production, despite lots of exciting prototypes.\nI think this is because of gullibility.\nCan we solve this? Honestly, I’m beginning to suspect that you can’t fully solve gullibility without achieving AGI. So it may be quite a while before those agent dreams can really start to come true!\nCode may be the best application\nOver the course of the year, it’s become increasingly clear that writing code is one of the things LLMs are most capable of.',
'The two main categories I see are people who think AI agents are obviously things that go and act on your behalf—the travel agent model—and people who think in terms of LLMs that have been given access to tools which they can run in a loop as part of solving a problem. The term “autonomy” is often thrown into the mix too, again without including a clear definition.\n(I also collected 211 definitions on Twitter a few months ago—here they are in Datasette Lite—and had gemini-exp-1206 attempt to summarize them.)\nWhatever the term may mean, agents still have that feeling of perpetually “coming soon”.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.9167 |
| cosine_accuracy@3 | 1.0 |
| cosine_accuracy@5 | 1.0 |
| cosine_accuracy@10 | 1.0 |
| cosine_precision@1 | 0.9167 |
| cosine_precision@3 | 0.3333 |
| cosine_precision@5 | 0.2 |
| cosine_precision@10 | 0.1 |
| cosine_recall@1 | 0.9167 |
| cosine_recall@3 | 1.0 |
| cosine_recall@5 | 1.0 |
| cosine_recall@10 | 1.0 |
| **cosine_ndcg@10** | **0.9692** |
| cosine_mrr@10 | 0.9583 |
| cosine_map@100 | 0.9583 |
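The numbers above can be reproduced with the same evaluator class. The snippet below is an illustrative sketch only: the query/corpus/relevance dictionaries are placeholders, not the actual evaluation split used for this model.
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("mafzaal/finetuned_arctic_ft")

# Placeholder evaluation data: query id -> text, doc id -> text,
# query id -> set of relevant doc ids. Build these from your own split.
queries = {"q1": "What matters most when training an LLM?"}
corpus = {"d1": "What matters most is the training data..."}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="example")
metrics = evaluator(model)  # dict including cosine_accuracy@k, cosine_ndcg@10, ...
print(metrics)
```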
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 156 training samples
* Columns: <code>sentence_0</code> and <code>sentence_1</code>
* Approximate statistics based on the first 156 samples:
| | sentence_0 | sentence_1 |
|:--------|:-----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 12 tokens</li><li>mean: 21.18 tokens</li><li>max: 37 tokens</li></ul> | <ul><li>min: 43 tokens</li><li>mean: 135.14 tokens</li><li>max: 214 tokens</li></ul> |
* Samples:
| sentence_0 | sentence_1 |
|:-----------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>What significant development in Artificial Intelligence occurred in 2023 according to Simon Willison’s weblog?</code> | <code>Stuff we figured out about AI in 2023<br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br>Simon Willison’s Weblog<br>Subscribe<br><br><br><br><br><br><br>Stuff we figured out about AI in 2023<br>31st December 2023<br>2023 was the breakthrough year for Large Language Models (LLMs). I think it’s OK to call these AI—they’re the latest and (currently) most interesting development in the academic field of Artificial Intelligence that dates back to the 1950s.<br>Here’s my attempt to round up the highlights in one place!</code> |
| <code>How does Simon Willison describe the relationship between Large Language Models and the broader field of Artificial Intelligence?</code> | <code>Stuff we figured out about AI in 2023<br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br>Simon Willison’s Weblog<br>Subscribe<br><br><br><br><br><br><br>Stuff we figured out about AI in 2023<br>31st December 2023<br>2023 was the breakthrough year for Large Language Models (LLMs). I think it’s OK to call these AI—they’re the latest and (currently) most interesting development in the academic field of Artificial Intelligence that dates back to the 1950s.<br>Here’s my attempt to round up the highlights in one place!</code> |
| <code>What are some challenges mentioned in building large language models like GPT-4?</code> | <code>Large Language Models<br>They’re actually quite easy to build<br>You can run LLMs on your own devices<br>Hobbyists can build their own fine-tuned models<br>We don’t yet know how to build GPT-4<br>Vibes Based Development<br>LLMs are really smart, and also really, really dumb<br>Gullibility is the biggest unsolved problem<br>Code may be the best application<br>The ethics of this space remain diabolically complex<br>My blog in 2023</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
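For reference, a loss configuration like the one above is typically constructed as shown below. This is a minimal sketch of the pattern, not the exact training script used for this model.
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("Snowflake/snowflake-arctic-embed-l")

# In-batch-negatives ranking loss over (question, passage) pairs.
inner_loss = MultipleNegativesRankingLoss(model)

# Apply the same loss to embeddings truncated to each Matryoshka dimension.
train_loss = MatryoshkaLoss(
    model=model,
    loss=inner_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1, 1],
)
```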
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 10
- `per_device_eval_batch_size`: 10
- `num_train_epochs`: 10
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 10
- `per_device_eval_batch_size`: 10
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 10
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `tp_size`: 0
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
</details>
### Training Logs
| Epoch | Step | cosine_ndcg@10 |
|:-----:|:----:|:--------------:|
| 1.0 | 16 | 0.9638 |
| 2.0 | 32 | 0.9539 |
| 3.0 | 48 | 0.9539 |
| 3.125 | 50 | 0.9539 |
| 4.0 | 64 | 0.9692 |
| 5.0 | 80 | 0.9692 |
| 6.0 | 96 | 0.9692 |
| 6.25 | 100 | 0.9539 |
| 7.0 | 112 | 0.9692 |
| 8.0 | 128 | 0.9692 |
| 9.0 | 144 | 0.9692 |
| 9.375 | 150 | 0.9692 |
| 10.0 | 160 | 0.9692 |
### Framework Versions
- Python: 3.11.12
- Sentence Transformers: 4.1.0
- Transformers: 4.51.3
- PyTorch: 2.6.0+cu124
- Accelerate: 1.6.0
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
Pinkstack/Parm-2-CoT-14B-16k-o1-QwQ | Pinkstack | 2025-05-03T15:33:32Z | 30 | 7 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"phi",
"nlp",
"math",
"code",
"chat",
"conversational",
"phi3",
"reasoning",
"CoT",
"en",
"multilingual",
"dataset:amphora/QwQ-LongCoT-130K",
"arxiv:2412.08905",
"base_model:microsoft/phi-4",
"base_model:finetune:microsoft/phi-4",
"license:mit",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-01-14T21:37:30Z | ---
license: mit
license_link: https://huggingface.co/microsoft/phi-4/resolve/main/LICENSE
language:
- en
- multilingual
pipeline_tag: text-generation
tags:
- phi
- nlp
- math
- code
- chat
- conversational
- phi3
- reasoning
- CoT
inference:
parameters:
temperature: 0.3
widget:
- messages:
- role: user
content: How many R's in strawberry? Think step by step.
library_name: transformers
datasets:
- amphora/QwQ-LongCoT-130K
base_model:
- microsoft/phi-4
model-index:
- name: SuperThoughts-CoT-14B-16k-o1-QwQ
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: wis-k/instruction-following-eval
split: train
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 5.15
name: averaged accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=Pinkstack%2FSuperThoughts-CoT-14B-16k-o1-QwQ
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: SaylorTwift/bbh
split: test
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 52.85
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=Pinkstack%2FSuperThoughts-CoT-14B-16k-o1-QwQ
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: lighteval/MATH-Hard
split: test
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 40.79
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=Pinkstack%2FSuperThoughts-CoT-14B-16k-o1-QwQ
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
split: train
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 19.02
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=Pinkstack%2FSuperThoughts-CoT-14B-16k-o1-QwQ
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 21.79
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=Pinkstack%2FSuperThoughts-CoT-14B-16k-o1-QwQ
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 47.43
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=Pinkstack%2FSuperThoughts-CoT-14B-16k-o1-QwQ
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Llmexplorer lmsys elo
type: elo-score
config: main
split: test
metrics:
- type: elo
value: 1203
name: elo
source:
url: https://llm.extractum.io/list/?benchmark=score_elo
name: LLMexplorer lmsys elo score
---
Renamed to Parm-2.
Please note: the low IFEval score is a consequence of this model always reasoning, which limits strict instruction following. This should not matter for most use cases.
gguf/final version: https://huggingface.co/Pinkstack/PARM-V2-phi-4-16k-CoT-o1-gguf
This model can be merged with phi-4 based LLMs!
[Phi-4 Technical Report](https://arxiv.org/pdf/2412.08905)
[superthoughts 14B openllm detailed results](https://huggingface.co/datasets/open-llm-leaderboard/Pinkstack__SuperThoughts-CoT-14B-16k-o1-QwQ-details)
Phi-4 tuned to be more advanced at reasoning.
Unlike other Parm models, we had to optimize our fine-tuning process to ensure accuracy while still being able to release this model. **Training loss: 0.443800**
Beats Qwen/QwQ at MATH, MuSR and GPQA (MuSR being a reasoning benchmark).
Evaluation:






The model can use this prompt format (a modified Phi-4 prompt; by adding a system prompt telling the model to reason before responding, you'll get a similar if not better response):
```
{{ if .System }}<|system|>
{{ .System }}<|im_end|>
{{ end }}{{ if .Prompt }}<|user|>
{{ .Prompt }}<|im_end|>
{{ end }}<|assistant|>{{ .CoT }}<|CoT|>
{{ .Response }}<|FinalAnswer|><|im_end|>
```
It is recommended to use a system prompt like this one:
```
You are a helpful ai assistant. Make sure to put your finalanswer at the end.
```
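For reference, a minimal (unofficial) sketch of running the model with 🤗 Transformers, using the recommended system prompt and a low temperature, might look like this:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Pinkstack/Parm-2-CoT-14B-16k-o1-QwQ"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [
    {"role": "system", "content": "You are a helpful ai assistant. Make sure to put your finalanswer at the end."},
    {"role": "user", "content": "How many R's in strawberry? Think step by step."},
]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=512, do_sample=True, temperature=0.3)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```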
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/Pinkstack__SuperThoughts-CoT-14B-16k-o1-QwQ-details)!
Summarized results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/contents/viewer/default/train?q=Pinkstack%2FSuperThoughts-CoT-14B-16k-o1-QwQ&sort[column]=Average%20%E2%AC%86%EF%B8%8F&sort[direction]=desc)!
| Metric |Value (%)|
|-------------------|--------:|
|**Average** | 31.17|
|IFEval (0-Shot) | 5.15|
|BBH (3-Shot) | 52.85|
|MATH Lvl 5 (4-Shot)| 40.79|
|GPQA (0-shot) | 19.02|
|MuSR (0-shot) | 21.79|
|MMLU-PRO (5-shot) | 47.43|
# other leaderboard
According to https://llm.extractum.io/list/?benchmark=score_elo, this model is in the top 20 on their LMSys ELO score leaderboard.
# 🧀 Examples:
(q4_k_m, 10GB rtx 3080, 64GB memory, running inside of MSTY, all use "You are a friendly ai assistant." as the System prompt.)
**example 1:**

**example 2:**

**example 3:**

**example 4:**


All generated locally
# 🧀 Information
- ⚠️ A low temperature must be used to ensure it won't fail at reasoning. We use 0.3 - 0.8!
- ⚠️ Due to the current prompt format, it may sometimes put <|FinalAnswer|> without providing a final answer at the end; you can ignore this or modify the prompt format.
This is our flagship model, with top-tier reasoning, rivaling gemini-flash-exp-2.0-thinking and o1 mini. Overall, the results are similar to both of them.
# Uploaded model
- **Developed by:** Pinkstack
- **License:** MIT
- **Finetuned from model :** microsoft/phi-4
This phi-4 model was trained with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
|
Triangle104/Llama-3.1-Nemotron-Nano-8B-v1-Q5_K_M-GGUF | Triangle104 | 2025-05-03T15:32:44Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"nvidia",
"llama-3",
"pytorch",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:nvidia/Llama-3.1-Nemotron-Nano-8B-v1",
"base_model:quantized:nvidia/Llama-3.1-Nemotron-Nano-8B-v1",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-05-03T15:00:56Z | ---
base_model: nvidia/Llama-3.1-Nemotron-Nano-8B-v1
language:
- en
library_name: transformers
license: other
license_name: nvidia-open-model-license
license_link: https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/
pipeline_tag: text-generation
tags:
- nvidia
- llama-3
- pytorch
- llama-cpp
- gguf-my-repo
---
# Triangle104/Llama-3.1-Nemotron-Nano-8B-v1-Q5_K_M-GGUF
This model was converted to GGUF format from [`nvidia/Llama-3.1-Nemotron-Nano-8B-v1`](https://huggingface.co/nvidia/Llama-3.1-Nemotron-Nano-8B-v1) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/nvidia/Llama-3.1-Nemotron-Nano-8B-v1) for more details on the model.
---
Llama-3.1-Nemotron-Nano-8B-v1 is a large language model (LLM) which is a derivative of Meta Llama-3.1-8B-Instruct (AKA the reference model). It is a reasoning model that is post-trained for reasoning, human chat preferences, and tasks such as RAG and tool calling.
Llama-3.1-Nemotron-Nano-8B-v1 offers a great tradeoff between model accuracy and efficiency. It is created from Llama 3.1 8B Instruct and offers improvements in model accuracy. The model fits on a single RTX GPU and can be used locally. The model supports a context length of 128K.
This model underwent a multi-phase post-training process to enhance both its reasoning and non-reasoning capabilities. This includes a supervised fine-tuning stage for Math, Code, Reasoning, and Tool Calling, as well as multiple reinforcement learning (RL) stages using REINFORCE (RLOO) and Online Reward-aware Preference Optimization (RPO) algorithms for both chat and instruction-following. The final model checkpoint is obtained after merging the final SFT and Online RPO checkpoints.
Improved using Qwen.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Llama-3.1-Nemotron-Nano-8B-v1-Q5_K_M-GGUF --hf-file llama-3.1-nemotron-nano-8b-v1-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Llama-3.1-Nemotron-Nano-8B-v1-Q5_K_M-GGUF --hf-file llama-3.1-nemotron-nano-8b-v1-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Llama-3.1-Nemotron-Nano-8B-v1-Q5_K_M-GGUF --hf-file llama-3.1-nemotron-nano-8b-v1-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Llama-3.1-Nemotron-Nano-8B-v1-Q5_K_M-GGUF --hf-file llama-3.1-nemotron-nano-8b-v1-q5_k_m.gguf -c 2048
```
|
sweet-sour-pig/thro | sweet-sour-pig | 2025-05-03T15:30:32Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-classification",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.2-1B",
"base_model:finetune:meta-llama/Llama-3.2-1B",
"license:llama3.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-05-03T10:22:51Z | ---
library_name: transformers
license: llama3.2
base_model: meta-llama/Llama-3.2-1B
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- recall
model-index:
- name: thro
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# thro
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0000
- Accuracy: 1.0
- F1: 1.0
- Recall: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---:|:------:|
| 0.0 | 1.0 | 125 | 0.0000 | 1.0 | 1.0 | 1.0 |
| 0.0 | 2.0 | 250 | 0.0000 | 1.0 | 1.0 | 1.0 |
### Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 3.3.1
- Tokenizers 0.21.0
|
OOOss/ru-ner-model | OOOss | 2025-05-03T15:29:51Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"token-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2025-05-03T15:27:32Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Hachipo/OpenCoder-8B-Base-MIFT-en_10000_2 | Hachipo | 2025-05-03T15:29:47Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-03T15:26:06Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
phospho-app/anthonyav-so100-lego-v3-caopgh0iqe | phospho-app | 2025-05-03T15:22:21Z | 0 | 0 | null | [
"safetensors",
"gr00t_n1",
"phosphobot",
"gr00t",
"region:us"
] | null | 2025-05-03T14:46:37Z |
---
tags:
- phosphobot
- gr00t
task_categories:
- robotics
---
# gr00t Model - phospho Training Pipeline
## This model was trained using **phospho**.
Training was successful, try it out on your robot!
## Training parameters:
- **Dataset**: [anthonyav/so100-lego-v3](https://huggingface.co/datasets/anthonyav/so100-lego-v3)
- **Wandb run URL**: None
- **Epochs**: 10
- **Batch size**: 64
- **Training steps**: None
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=replicate_groot_training_pipeline)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=replicate_groot_training_pipeline)
|
mlx-community/Qwen3-8B-Base-bf16 | mlx-community | 2025-05-03T15:15:47Z | 0 | 0 | mlx | [
"mlx",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"base_model:Qwen/Qwen3-8B-Base",
"base_model:finetune:Qwen/Qwen3-8B-Base",
"license:apache-2.0",
"region:us"
] | text-generation | 2025-05-03T15:11:03Z | ---
license: apache-2.0
library_name: mlx
tags:
- mlx
pipeline_tag: text-generation
base_model: Qwen/Qwen3-8B-Base
---
# mlx-community/Qwen3-8B-Base-bf16
This model [mlx-community/Qwen3-8B-Base-bf16](https://huggingface.co/mlx-community/Qwen3-8B-Base-bf16) was
converted to MLX format from [Qwen/Qwen3-8B-Base](https://huggingface.co/Qwen/Qwen3-8B-Base)
using mlx-lm version **0.24.0**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/Qwen3-8B-Base-bf16")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
trongg/lora-c46bb480-76cc-4d4e-9fec-77ef5c320ce4-1400 | trongg | 2025-05-03T15:15:31Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:unsloth/mistral-7b",
"base_model:adapter:unsloth/mistral-7b",
"region:us"
] | null | 2025-05-03T15:13:01Z | ---
base_model: unsloth/mistral-7b
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
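In the absence of official instructions, a minimal sketch (assuming this LoRA adapter loads directly onto its stated base model) might look like:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/mistral-7b"
adapter_id = "trongg/lora-c46bb480-76cc-4d4e-9fec-77ef5c320ce4-1400"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)  # attach the LoRA adapter
```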
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2 |
Shengkun/DarwinLM-4.6B-Llama3.1-8B-Pruned-Masked | Shengkun | 2025-05-03T15:07:11Z | 157 | 0 | null | [
"safetensors",
"llama",
"license:apache-2.0",
"region:us"
] | null | 2025-04-13T06:26:56Z | ---
license: apache-2.0
---
This is DarwinLM pruned from Llama-3.1-8B. The model is masked: the pruned weights are set to 0, while the remaining weights are the same as in the original model.
The shapes of all weights are the same as in the original model.
```python
# To use the model
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("Shengkun/DarwinLM-4.6B-Llama3.1-8B-Pruned-Masked")
```
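Because pruning is represented by explicit zeros, the overall sparsity can be checked directly. The snippet below is an illustrative check, not part of the original release.
```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("Shengkun/DarwinLM-4.6B-Llama3.1-8B-Pruned-Masked")

# Count how many parameters were zeroed out by pruning.
total, zeros = 0, 0
for _, param in model.named_parameters():
    total += param.numel()
    zeros += (param == 0).sum().item()
print(f"Zeroed parameters: {zeros / total:.2%} of {total:,}")
```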
**4.6B**
| Model | Method | Param. | SciQ | PIQA | WG | ArcE | ArcC | HS | LogiQA | BoolQ | MMLU | Avg |
|-----------------|------------------------|--------|------|------|------|------|------|------|--------|-------|------|------|
| **Llama-3.1-8B** | **Dense** | 8B | 96.3 | 81.2 | 74.3 | 81.4 | 58.2 | 81.7 | 31.1 | 84.0 | 65.2 | 72.8 |
| | **Uniform** | 4.5B | 29.1 | 53.6 | 51.7 | 26.0 | 23.6 | 27.1 | 25.5 | 62.1 | 25.7 | 36.1 |
| | **ZipLM** | 6B | 65.5 | 60.6 | 56.0 | 40.2 | 34.4 | 34.4 | 28.1 | 63.0 | 27.9 | 45.7 |
| | *DarwinLM (one-shot)* | 4.6B | 84.9 | 69.4 | 57.3 | 59.6 | 34.2 | 44.6 | 24.1 | 62.2 | 28.5 | 51.6 |
| | **OLMO (2.5T)** | 7B | 92.8 | 79.4 | 70.4 | 73.3 | 44.9 | 77.1 | 27.9 | 72.5 | 28.3 | 62.9 |
| | *DarwinLM (10.0B)* | 4.6B | 93.2 | 74.8 | 67.4 | 73.2 | 51.6 | 71.3 | 30.7 | 71.1 | 40.6 | 63.7 | |
jdchang/full-with-label-bs-1024-sg-2-step-486 | jdchang | 2025-05-03T14:59:33Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2025-05-03T14:59:09Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ridwanridus/ridus | ridwanridus | 2025-05-03T14:55:21Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-03T14:55:21Z | ---
license: apache-2.0
---
|
lisabdunlap/Llama-3.1-8B-Instruct-unsloth-bnb-4bit-r32-e10-lr0.0002-mixed-actors_reviews_markdown-new | lisabdunlap | 2025-05-03T14:50:39Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/Llama-3.1-8B-Instruct-unsloth-bnb-4bit",
"base_model:finetune:unsloth/Llama-3.1-8B-Instruct-unsloth-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-03T14:48:24Z | ---
base_model: unsloth/Llama-3.1-8B-Instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** lisabdunlap
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Llama-3.1-8B-Instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Benson87/min_model | Benson87 | 2025-05-03T14:48:50Z | 0 | 0 | null | [
"pytorch",
"region:us"
] | null | 2025-05-03T14:34:54Z | # Min Model
This model was created as a test and placeholder. You can replace pytorch_model.bin with a trained model.
## Structure
- **pytorch_model.bin**: Dummy or real model
- **config.json**: Required for compatibility
## Status
🚧 Under construction – will later be replaced with real weights
|
ASethi04/meta-llama-Llama-3.1-8B-hellaswag-first-lora-4-0.0001-same-prompt-template | ASethi04 | 2025-05-03T14:47:29Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:finetune:meta-llama/Llama-3.1-8B",
"endpoints_compatible",
"region:us"
] | null | 2025-05-03T10:52:44Z | ---
base_model: meta-llama/Llama-3.1-8B
library_name: transformers
model_name: meta-llama-Llama-3.1-8B-hellaswag-first-lora-4-0.0001-same-prompt-template
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for meta-llama-Llama-3.1-8B-hellaswag-first-lora-4-0.0001-same-prompt-template
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ASethi04/meta-llama-Llama-3.1-8B-hellaswag-first-lora-4-0.0001-same-prompt-template", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/torchql-org/huggingface/runs/rxgjo7gt)
This model was trained with SFT.
### Framework versions
- TRL: 0.16.1
- Transformers: 4.51.2
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
phuongnam06052002/Notnow | phuongnam06052002 | 2025-05-03T14:44:54Z | 0 | 0 | null | [
"license:bigscience-openrail-m",
"region:us"
] | null | 2025-05-03T14:44:54Z | ---
license: bigscience-openrail-m
---
|