| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
|---|---|---|---|---|---|---|---|---|---|
SvetKochnev/wan2.1_1.3b_female_anatomy | SvetKochnev | 2025-05-03T17:15:55Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-03T17:10:28Z | ---
license: apache-2.0
---
This LoRA helps generate slightly more accurate female anatomical features.
This is probably one of the first LoRA trainings for a 1.3B model, so there is nowhere to look yet for settings or better variations. The training was done hastily and may not give perfect results; nevertheless, it improves generations.
I will continue experimenting with this and would be glad to get any feedback, especially on your generation results.
If you want to help, you can send me a link to a cloud file with short videos or photos on this topic, or anything else; it's your turn.
Trigger word is ntm. |
Triangle104/Llama-3.1-Nemotron-Nano-8B-v1-abliterated-Q5_K_M-GGUF | Triangle104 | 2025-05-03T17:14:48Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"nvidia",
"llama-3",
"pytorch",
"abliterated",
"uncensored",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:huihui-ai/Llama-3.1-Nemotron-Nano-8B-v1-abliterated",
"base_model:quantized:huihui-ai/Llama-3.1-Nemotron-Nano-8B-v1-abliterated",
"license:other",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-03T17:14:21Z | ---
base_model: huihui-ai/Llama-3.1-Nemotron-Nano-8B-v1-abliterated
language:
- en
library_name: transformers
license: other
license_name: nvidia-open-model-license
license_link: https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/
pipeline_tag: text-generation
tags:
- nvidia
- llama-3
- pytorch
- abliterated
- uncensored
- llama-cpp
- gguf-my-repo
---
# Triangle104/Llama-3.1-Nemotron-Nano-8B-v1-abliterated-Q5_K_M-GGUF
This model was converted to GGUF format from [`huihui-ai/Llama-3.1-Nemotron-Nano-8B-v1-abliterated`](https://huggingface.co/huihui-ai/Llama-3.1-Nemotron-Nano-8B-v1-abliterated) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/huihui-ai/Llama-3.1-Nemotron-Nano-8B-v1-abliterated) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Llama-3.1-Nemotron-Nano-8B-v1-abliterated-Q5_K_M-GGUF --hf-file llama-3.1-nemotron-nano-8b-v1-abliterated-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Llama-3.1-Nemotron-Nano-8B-v1-abliterated-Q5_K_M-GGUF --hf-file llama-3.1-nemotron-nano-8b-v1-abliterated-q5_k_m.gguf -c 2048
```
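Once the server is running (default port 8080), you can also query it over HTTP. A minimal sketch, assuming llama.cpp's `/completion` endpoint on the default port:
```python
import json, urllib.request

# Sketch: POST a prompt to a locally running llama-server (assumes the default
# port 8080 and the /completion endpoint exposed by llama.cpp's server).
req = urllib.request.Request(
    "http://localhost:8080/completion",
    data=json.dumps({"prompt": "The meaning to life and the universe is",
                     "n_predict": 64}).encode(),
    headers={"Content-Type": "application/json"},
)
print(json.load(urllib.request.urlopen(req))["content"])
```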
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Llama-3.1-Nemotron-Nano-8B-v1-abliterated-Q5_K_M-GGUF --hf-file llama-3.1-nemotron-nano-8b-v1-abliterated-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Llama-3.1-Nemotron-Nano-8B-v1-abliterated-Q5_K_M-GGUF --hf-file llama-3.1-nemotron-nano-8b-v1-abliterated-q5_k_m.gguf -c 2048
```
|
Arpit-Bansal/Medical-Diagnosing-models | Arpit-Bansal | 2025-05-03T17:13:22Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-03T16:43:02Z | ---
license: apache-2.0
---
|
fats-fme/766a94ae-8bf0-4bf8-8045-479f98f1981c | fats-fme | 2025-05-03T17:12:49Z | 0 | 0 | peft | [
"peft",
"safetensors",
"gemma2",
"axolotl",
"generated_from_trainer",
"base_model:UCLA-AGI/Gemma-2-9B-It-SPPO-Iter2",
"base_model:adapter:UCLA-AGI/Gemma-2-9B-It-SPPO-Iter2",
"license:gemma",
"region:us"
] | null | 2025-05-03T16:55:21Z | ---
library_name: peft
license: gemma
base_model: UCLA-AGI/Gemma-2-9B-It-SPPO-Iter2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 766a94ae-8bf0-4bf8-8045-479f98f1981c
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: UCLA-AGI/Gemma-2-9B-It-SPPO-Iter2
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 3d9bbedf3412069b_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/3d9bbedf3412069b_train_data.json
type:
field_input: topic
field_instruction: prompt
field_output: cluster_description
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
early_stopping_patience: 3
eval_max_new_tokens: 128
eval_steps: 100
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: fats-fme/766a94ae-8bf0-4bf8-8045-479f98f1981c
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 128
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 130GB
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/3d9bbedf3412069b_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 10
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 100
saves_per_epoch: null
sequence_len: 2048
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: a49c33b2-bc66-45d8-a6f5-a97fc1badc96
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: a49c33b2-bc66-45d8-a6f5-a97fc1badc96
warmup_steps: 200
weight_decay: 0.01
xformers_attention: null
```
</details><br>
# 766a94ae-8bf0-4bf8-8045-479f98f1981c
This model is a fine-tuned version of [UCLA-AGI/Gemma-2-9B-It-SPPO-Iter2](https://huggingface.co/UCLA-AGI/Gemma-2-9B-It-SPPO-Iter2) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2412
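This repo ships only the LoRA adapter; a minimal sketch of loading it on top of the base model with standard PEFT usage (not an official example from the trainer):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach this repo's LoRA adapter (standard PEFT flow).
base = AutoModelForCausalLM.from_pretrained(
    "UCLA-AGI/Gemma-2-9B-It-SPPO-Iter2", device_map="auto"
)
model = PeftModel.from_pretrained(base, "fats-fme/766a94ae-8bf0-4bf8-8045-479f98f1981c")
tokenizer = AutoTokenizer.from_pretrained("UCLA-AGI/Gemma-2-9B-It-SPPO-Iter2")
```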
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (bitsandbytes 8-bit) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 200
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0009 | 1 | 8.0848 |
| 0.4674 | 0.0903 | 100 | 0.3280 |
| 0.2307 | 0.1805 | 200 | 0.2412 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
lamdo/bert-base-uncased-phrase-60kaddedphrasesfroms2orc-mlm-150000steps | lamdo | 2025-05-03T17:09:53Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2025-05-03T17:09:31Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Triangle104/Llama-3.1-Nemotron-Nano-8B-v1-abliterated-Q5_K_S-GGUF | Triangle104 | 2025-05-03T17:07:53Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"nvidia",
"llama-3",
"pytorch",
"abliterated",
"uncensored",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:huihui-ai/Llama-3.1-Nemotron-Nano-8B-v1-abliterated",
"base_model:quantized:huihui-ai/Llama-3.1-Nemotron-Nano-8B-v1-abliterated",
"license:other",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-03T17:07:28Z | ---
base_model: huihui-ai/Llama-3.1-Nemotron-Nano-8B-v1-abliterated
language:
- en
library_name: transformers
license: other
license_name: nvidia-open-model-license
license_link: https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/
pipeline_tag: text-generation
tags:
- nvidia
- llama-3
- pytorch
- abliterated
- uncensored
- llama-cpp
- gguf-my-repo
---
# Triangle104/Llama-3.1-Nemotron-Nano-8B-v1-abliterated-Q5_K_S-GGUF
This model was converted to GGUF format from [`huihui-ai/Llama-3.1-Nemotron-Nano-8B-v1-abliterated`](https://huggingface.co/huihui-ai/Llama-3.1-Nemotron-Nano-8B-v1-abliterated) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/huihui-ai/Llama-3.1-Nemotron-Nano-8B-v1-abliterated) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Llama-3.1-Nemotron-Nano-8B-v1-abliterated-Q5_K_S-GGUF --hf-file llama-3.1-nemotron-nano-8b-v1-abliterated-q5_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Llama-3.1-Nemotron-Nano-8B-v1-abliterated-Q5_K_S-GGUF --hf-file llama-3.1-nemotron-nano-8b-v1-abliterated-q5_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Llama-3.1-Nemotron-Nano-8B-v1-abliterated-Q5_K_S-GGUF --hf-file llama-3.1-nemotron-nano-8b-v1-abliterated-q5_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Llama-3.1-Nemotron-Nano-8B-v1-abliterated-Q5_K_S-GGUF --hf-file llama-3.1-nemotron-nano-8b-v1-abliterated-q5_k_s.gguf -c 2048
```
|
PeterL123/GLM-Z1-9B-0414 | PeterL123 | 2025-05-03T17:05:05Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"glm4",
"text-generation",
"conversational",
"zh",
"en",
"arxiv:2406.12793",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-03T17:04:11Z | ---
license: mit
language:
- zh
- en
pipeline_tag: text-generation
library_name: transformers
---
# GLM-4-Z1-9B-0414
## Introduction
The GLM family welcomes a new generation of open-source models, the **GLM-4-32B-0414** series, featuring 32 billion parameters. Its performance is comparable to OpenAI's GPT series and DeepSeek's V3/R1 series, and it supports very user-friendly local deployment. GLM-4-32B-Base-0414 was pre-trained on 15T tokens of high-quality data, including a large amount of reasoning-oriented synthetic data, laying the foundation for subsequent reinforcement learning extensions. In the post-training stage, in addition to human preference alignment for dialogue scenarios, we also enhanced the model's performance in instruction following, engineering code, and function calling using techniques such as rejection sampling and reinforcement learning, strengthening the atomic capabilities required for agent tasks. GLM-4-32B-0414 achieves good results in areas such as engineering code, Artifact generation, function calling, search-based Q&A, and report generation. On some benchmarks, it even rivals larger models such as GPT-4o and DeepSeek-V3-0324 (671B).
**GLM-Z1-32B-0414** is a reasoning model with **deep thinking capabilities**. It was developed from GLM-4-32B-0414 through cold start and extended reinforcement learning, plus further training on tasks involving mathematics, code, and logic. Compared to the base model, GLM-Z1-32B-0414 significantly improves mathematical abilities and the capability to solve complex tasks. During training, we also introduced general reinforcement learning based on pairwise ranking feedback, further enhancing the model's general capabilities.
**GLM-Z1-Rumination-32B-0414** is a deep reasoning model with **rumination capabilities** (benchmarked against OpenAI's Deep Research). Unlike typical deep thinking models, the rumination model employs longer periods of deep thought to solve more open-ended and complex problems (e.g., writing a comparative analysis of AI development in two cities and their future development plans). The rumination model integrates search tools during its deep thinking process to handle complex tasks and is trained by utilizing multiple rule-based rewards to guide and extend end-to-end reinforcement learning. Z1-Rumination shows significant improvements in research-style writing and complex retrieval tasks.
Finally, **GLM-Z1-9B-0414** is a surprise. We employed the aforementioned series of techniques to train a 9B small-sized model that maintains the open-source tradition. Despite its smaller scale, GLM-Z1-9B-0414 still exhibits excellent capabilities in mathematical reasoning and general tasks. Its overall performance is already at a leading level among open-source models of the same size. Especially in resource-constrained scenarios, this model achieves an excellent balance between efficiency and effectiveness, providing a powerful option for users seeking lightweight deployment.
## Performance
<p align="center">
<img width="100%" src="https://raw.githubusercontent.com/THUDM/GLM-4/refs/heads/main/resources/Bench-Z1-32B.png">
</p>
<p align="center">
<img width="100%" src="https://raw.githubusercontent.com/THUDM/GLM-4/refs/heads/main/resources/Bench-Z1-9B.png">
</p>
## Model Usage Guidelines
### I. Sampling Parameters
| Parameter | Recommended Value | Description |
| ------------ | ----------------- | -------------------------------------------- |
| temperature | **0.6** | Balances creativity and stability |
| top_p | **0.95** | Cumulative probability threshold for sampling|
| top_k | **40** | Filters out rare tokens while maintaining diversity |
| max_new_tokens | **30000** | Leaves enough tokens for thinking |
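These values map directly onto standard `transformers` generation arguments. A minimal sketch, assuming the `model`, `tokenizer`, and `inputs` from the Inference Code section below (and sampling enabled, unlike that section's greedy setup):
```python
# Recommended sampling settings from the table above (illustrative sketch only).
out = model.generate(
    **inputs,
    do_sample=True,
    temperature=0.6,       # balances creativity and stability
    top_p=0.95,            # nucleus-sampling cumulative-probability threshold
    top_k=40,              # filter rare tokens while keeping diversity
    max_new_tokens=30000,  # leave enough room for the thinking phase
)
```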
### II. Enforced Thinking
- Add \<think\>\n to the **first line**: Ensures the model thinks before responding
- When using `chat_template.jinja`, this prefix is injected into the prompt automatically to enforce the behavior
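If you build prompts by hand instead of relying on `chat_template.jinja`, the same effect can be approximated along these lines (a hypothetical sketch, not the official template logic):
```python
# Hypothetical manual construction: append "<think>\n" so the model
# reasons before responding (chat_template.jinja injects this for you).
prompt = tokenizer.apply_chat_template(
    message, add_generation_prompt=True, tokenize=False
)
if not prompt.endswith("<think>\n"):
    prompt += "<think>\n"
```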
### III. Dialogue History Trimming
- Retain only the **final user-visible reply**.
Hidden thinking content should **not** be saved to history to reduce interference—this is already implemented in `chat_template.jinja`
### IV. Handling Long Contexts (YaRN)
- When input length exceeds **8,192 tokens**, consider enabling YaRN (Rope Scaling)
- In supported frameworks, add the following snippet to `config.json`:
```json
"rope_scaling": {
"type": "yarn",
"factor": 4.0,
"original_max_position_embeddings": 32768
}
```
- **Static YaRN** applies uniformly to all text. It may slightly degrade performance on short texts, so enable as needed.
## Inference Code
Make sure you are using `transformers>=4.51.3`.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
MODEL_PATH = "THUDM/GLM-4-Z1-9B-0414"
tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
model = AutoModelForCausalLM.from_pretrained(MODEL_PATH, device_map="auto")
message = [{"role": "user", "content": "Let a, b be positive real numbers such that ab = a + b + 3. Determine the range of possible values for a + b."}]
inputs = tokenizer.apply_chat_template(
message,
return_tensors="pt",
add_generation_prompt=True,
return_dict=True,
).to(model.device)
generate_kwargs = {
"input_ids": inputs["input_ids"],
"attention_mask": inputs["attention_mask"],
"max_new_tokens": 4096,
"do_sample": False,
}
out = model.generate(**generate_kwargs)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```
## Citations
If you find our work useful, please consider citing the following paper.
```
@misc{glm2024chatglm,
title={ChatGLM: A Family of Large Language Models from GLM-130B to GLM-4 All Tools},
author={Team GLM and Aohan Zeng and Bin Xu and Bowen Wang and Chenhui Zhang and Da Yin and Diego Rojas and Guanyu Feng and Hanlin Zhao and Hanyu Lai and Hao Yu and Hongning Wang and Jiadai Sun and Jiajie Zhang and Jiale Cheng and Jiayi Gui and Jie Tang and Jing Zhang and Juanzi Li and Lei Zhao and Lindong Wu and Lucen Zhong and Mingdao Liu and Minlie Huang and Peng Zhang and Qinkai Zheng and Rui Lu and Shuaiqi Duan and Shudan Zhang and Shulin Cao and Shuxun Yang and Weng Lam Tam and Wenyi Zhao and Xiao Liu and Xiao Xia and Xiaohan Zhang and Xiaotao Gu and Xin Lv and Xinghan Liu and Xinyi Liu and Xinyue Yang and Xixuan Song and Xunkai Zhang and Yifan An and Yifan Xu and Yilin Niu and Yuantao Yang and Yueyan Li and Yushi Bai and Yuxiao Dong and Zehan Qi and Zhaoyu Wang and Zhen Yang and Zhengxiao Du and Zhenyu Hou and Zihan Wang},
year={2024},
eprint={2406.12793},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
mveroe/Llama-3.2-1B-Instruct-reverse-safecoder-1.5-SecInsec-only-reverse-safecoder | mveroe | 2025-05-03T17:03:48Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:meta-llama/Llama-3.2-1B-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-1B-Instruct",
"license:llama3.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-03T13:47:32Z | ---
library_name: transformers
license: llama3.2
base_model: meta-llama/Llama-3.2-1B-Instruct
tags:
- generated_from_trainer
model-index:
- name: Llama-3.2-1B-Instruct-reverse-safecoder-1.5-SecInsec-only-reverse-safecoder
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-3.2-1B-Instruct-reverse-safecoder-1.5-SecInsec-only-reverse-safecoder
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adafactor; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 4000
### Training results
### Framework versions
- Transformers 4.51.3
- Pytorch 2.7.0+cu126
- Datasets 3.5.1
- Tokenizers 0.21.1
|
pictgencustomer/king-kong-19_705 | pictgencustomer | 2025-05-03T17:02:55Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-03T17:02:51Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: king-kong-19_michaeluffer_3
---
# King Kong 19_705
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `king-kong-19_michaeluffer_3` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('pictgencustomer/king-kong-19_705', weight_name='lora.safetensors')
image = pipeline('king-kong-19_michaeluffer_3, your prompt').images[0]  # include the trigger word
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
Marco0/zog | Marco0 | 2025-05-03T17:02:39Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-03T16:59:01Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mlfoundations-dev/no_pipeline_math_0.3k | mlfoundations-dev | 2025-05-03T17:01:44Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-03T16:21:50Z | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: no_pipeline_math_0.3k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# no_pipeline_math_0.3k
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/no_pipeline_math_0.3k dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 16
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- total_eval_batch_size: 128
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 13.0
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.5.1
- Datasets 3.1.0
- Tokenizers 0.20.3
|
TOMFORD79/Fly49 | TOMFORD79 | 2025-05-03T16:57:32Z | 0 | 0 | null | [
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | 2025-05-03T16:48:35Z | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
TOMFORD79/Fly48 | TOMFORD79 | 2025-05-03T16:57:11Z | 0 | 0 | null | [
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | 2025-05-03T16:48:29Z | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
Aaquib/gemma-3-1b-sft-alpaca | Aaquib | 2025-05-03T16:56:52Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-03T16:50:27Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
SFT'd version of google/gemma-3-1b-pt, trained solely on yahma/alpaca-cleaned; no further training was performed.
## Model Details
Hyperparameters to replicate:
- lr=1e-5
- num_epochs=1
- train_batch_size=40
- test_batch_size=32
- max_seq_len=256
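For a quick smoke test, standard `transformers` text-generation usage should work (a sketch, not an official example):
```python
from transformers import pipeline

# Minimal usage sketch; assumes standard text-generation support for this checkpoint.
pipe = pipeline("text-generation", model="Aaquib/gemma-3-1b-sft-alpaca")
print(pipe("Write a haiku about mountains.", max_new_tokens=64)[0]["generated_text"])
```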
### Model Description
- **Finetuned from model:** [google/gemma-3-1b-pt](https://huggingface.co/google/gemma-3-1b-pt) |
darshannere/Qwen2.5-3B-GSM8k_GRPO | darshannere | 2025-05-03T16:56:40Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-03T16:56:19Z | ---
base_model: unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** darshannere
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
TIGER-Lab/VLM2Vec-Qwen2VL-7B | TIGER-Lab | 2025-05-03T16:56:29Z | 314 | 1 | transformers | [
"transformers",
"pytorch",
"qwen2_vl",
"image-text-to-text",
"conversational",
"en",
"dataset:TIGER-Lab/MMEB-train",
"arxiv:2410.05160",
"base_model:Qwen/Qwen2-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2-VL-7B-Instruct",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-02-24T09:08:38Z | ---
license: apache-2.0
datasets:
- TIGER-Lab/MMEB-train
language:
- en
base_model:
- Qwen/Qwen2-VL-7B-Instruct
library_name: transformers
---
A new checkpoint trained using [Qwen/Qwen2-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-7B-Instruct) with an enhanced training setup (LoRA tuning, batch size of 2048, maximum sub-dataset size of 100k). This model shows significantly improved performance on MMEB & Flickr30K compared to the previous models that used Phi-3.5 and llava-v1.6-mistral as backbones.
This repo contains the code and data for [VLM2Vec: Training Vision-Language Models for Massive Multimodal Embedding Tasks](https://arxiv.org/abs/2410.05160). In this paper, we focus on building a unified multimodal embedding model suitable for a wide range of tasks. Our approach is based on transforming an existing, well-trained Vision-Language Model (VLM) into an embedding model.
## Github
- [Github](https://github.com/TIGER-AI-Lab/VLM2Vec)
## Data
Our model is trained on MMEB-train with contrastive learning and evaluated on MMEB-eval. We use only in-batch negatives for training, as sketched below.
- Train data: https://huggingface.co/datasets/TIGER-Lab/MMEB-train
- Eval data: https://huggingface.co/datasets/TIGER-Lab/MMEB-eval
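With in-batch negatives, each query's matching target is the positive and every other target in the batch serves as a negative. A minimal PyTorch sketch of this objective (not the repo's actual training code; the temperature value is an assumption):
```python
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(qry_reps: torch.Tensor,
                              tgt_reps: torch.Tensor,
                              temperature: float = 0.02) -> torch.Tensor:
    """InfoNCE with in-batch negatives.

    qry_reps, tgt_reps: (batch, dim) L2-normalized embeddings where row i
    of tgt_reps is the positive target for row i of qry_reps.
    """
    logits = qry_reps @ tgt_reps.T / temperature  # (batch, batch) similarities
    labels = torch.arange(qry_reps.size(0), device=qry_reps.device)
    return F.cross_entropy(logits, labels)        # diagonal entries are positives
```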
## Performance
This model outperforms the baselines and previous version of VLM2Vec by a large margin.
| Model | Classification | VQA | Retrieval | Grounding | IND | OOD | Overall |
|---------------------------------------|---------------|------|-----------|-----------|------|------|---------|
| Phi-3.5-V, Full-model fine-tuned (#crop=4) | 52.8 | 50.3 | 57.8 | 72.3 | 62.8 | 47.4 | 55.9 |
| Phi-3.5-V, LoRA | 54.8 | 54.9 | 62.3 | 79.5 | 66.5 | 52.0 | 60.1 |
| LLaVA-1.6, LoRA | 54.7 | 50.3 | 56.2 | 64.0 | 61.0 | 47.5 | 55.0 |
| LLaVA-1.6, LoRA | 61.2 | 49.9 | 67.4 | 86.1 | 67.5 | 57.1 | 62.9 |
| Qwen2-VL-2B, LoRA | 59.0 | 49.4 | 65.4 | 73.4 | 66.0 | 52.6 | 60.1 |
| **Qwen2-VL-7B, LoRA (this model)** | **62.6** | **57.8** | **69.9** | 81.7 | **72.2** | **57.8** | **65.8** |

## How to use VLM2Vec
(More details please refer to our Github repo, here is just a simple demo.)
First, clone our GitHub repository and install the requirements:
```bash
git clone https://github.com/TIGER-AI-Lab/VLM2Vec.git
cd VLM2Vec
pip install -r requirements.txt
```
```python
from src.model import MMEBModel
from src.arguments import ModelArguments
from src.model_utils import load_processor, QWEN2_VL, vlm_image_tokens
from PIL import Image
import torch
model_args = ModelArguments(
model_name='Qwen/Qwen2-VL-7B-Instruct',
checkpoint_path='TIGER-Lab/VLM2Vec-Qwen2VL-7B',
pooling='last',
normalize=True,
model_backbone='qwen2_vl',
lora=True
)
processor = load_processor(model_args)
model = MMEBModel.load(model_args)
model = model.to('cuda', dtype=torch.bfloat16)
model.eval()
# Image + Text -> Text
inputs = processor(text=f'{vlm_image_tokens[QWEN2_VL]} Represent the given image with the following question: What is in the image',
images=Image.open('figures/example.jpg'),
return_tensors="pt")
inputs = {key: value.to('cuda') for key, value in inputs.items()}
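# Add a leading batch dimension to the vision inputs
# (assumption: MMEBModel expects batched pixel_values and image_grid_thw).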
inputs['pixel_values'] = inputs['pixel_values'].unsqueeze(0)
inputs['image_grid_thw'] = inputs['image_grid_thw'].unsqueeze(0)
qry_output = model(qry=inputs)["qry_reps"]
string = 'A cat and a dog'
inputs = processor(text=string,
images=None,
return_tensors="pt")
inputs = {key: value.to('cuda') for key, value in inputs.items()}
tgt_output = model(tgt=inputs)["tgt_reps"]
print(string, '=', model.compute_similarity(qry_output, tgt_output))
## A cat and a dog = tensor([[0.3301]], device='cuda:0', dtype=torch.bfloat16)
string = 'A cat and a tiger'
inputs = processor(text=string,
images=None,
return_tensors="pt")
inputs = {key: value.to('cuda') for key, value in inputs.items()}
tgt_output = model(tgt=inputs)["tgt_reps"]
print(string, '=', model.compute_similarity(qry_output, tgt_output))
## A cat and a tiger = tensor([[0.2891]], device='cuda:0', dtype=torch.bfloat16)
```
## Citation
```
@article{jiang2024vlm2vec,
title={VLM2Vec: Training Vision-Language Models for Massive Multimodal Embedding Tasks},
author={Jiang, Ziyan and Meng, Rui and Yang, Xinyi and Yavuz, Semih and Zhou, Yingbo and Chen, Wenhu},
journal={arXiv preprint arXiv:2410.05160},
year={2024}
}
```
|
Triangle104/Llama-3.1-Nemotron-Nano-8B-v1-abliterated-Q4_K_M-GGUF | Triangle104 | 2025-05-03T16:56:28Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"nvidia",
"llama-3",
"pytorch",
"abliterated",
"uncensored",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:huihui-ai/Llama-3.1-Nemotron-Nano-8B-v1-abliterated",
"base_model:quantized:huihui-ai/Llama-3.1-Nemotron-Nano-8B-v1-abliterated",
"license:other",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-03T16:56:01Z | ---
base_model: huihui-ai/Llama-3.1-Nemotron-Nano-8B-v1-abliterated
language:
- en
library_name: transformers
license: other
license_name: nvidia-open-model-license
license_link: https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/
pipeline_tag: text-generation
tags:
- nvidia
- llama-3
- pytorch
- abliterated
- uncensored
- llama-cpp
- gguf-my-repo
---
# Triangle104/Llama-3.1-Nemotron-Nano-8B-v1-abliterated-Q4_K_M-GGUF
This model was converted to GGUF format from [`huihui-ai/Llama-3.1-Nemotron-Nano-8B-v1-abliterated`](https://huggingface.co/huihui-ai/Llama-3.1-Nemotron-Nano-8B-v1-abliterated) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/huihui-ai/Llama-3.1-Nemotron-Nano-8B-v1-abliterated) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Llama-3.1-Nemotron-Nano-8B-v1-abliterated-Q4_K_M-GGUF --hf-file llama-3.1-nemotron-nano-8b-v1-abliterated-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Llama-3.1-Nemotron-Nano-8B-v1-abliterated-Q4_K_M-GGUF --hf-file llama-3.1-nemotron-nano-8b-v1-abliterated-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Llama-3.1-Nemotron-Nano-8B-v1-abliterated-Q4_K_M-GGUF --hf-file llama-3.1-nemotron-nano-8b-v1-abliterated-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Llama-3.1-Nemotron-Nano-8B-v1-abliterated-Q4_K_M-GGUF --hf-file llama-3.1-nemotron-nano-8b-v1-abliterated-q4_k_m.gguf -c 2048
```
|
malik-nazeba-minahil-malik-Viral-Video/18-EXCLUSIVE.malik.nazeba.minahil.malik.viral.video.Leaks.original | malik-nazeba-minahil-malik-Viral-Video | 2025-05-03T16:51:28Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-03T16:43:07Z | <animated-image data-catalyst=""><a href="https://tinyurl.com/4du2u735?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
Dive into the phenomenon of Malik Nazeba Minahil Malik's viral video. This comprehensive analysis examines its origins, the role of social media platforms, public reactions, and the ethical dimensions of viral content. Understand the dynamics of virality and how it shapes our digital interactions.
|
wATCH-Nighat-Naz-Viral-Video/wATCH.Nighat-Naz-Viral-video-link.original | wATCH-Nighat-Naz-Viral-Video | 2025-05-03T16:51:21Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-03T16:50:55Z | <animated-image data-catalyst=""><a href="https://tinyurl.com/fn84hrnu?news-viral-video" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a> |
wATCH-Nighat-Naz-Viral-Video/wATCH.Nighat-Naz-Viral-video-Nighat-Naz.original | wATCH-Nighat-Naz-Viral-Video | 2025-05-03T16:50:22Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-03T16:50:07Z | <animated-image data-catalyst=""><a href="https://tinyurl.com/fn84hrnu?news-viral-video" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a> |
jnjj/model_no_bias_my_13m-Q3_K_L-GGUF | jnjj | 2025-05-03T16:45:39Z | 0 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:jnjj/model_no_bias_my_13m",
"base_model:quantized:jnjj/model_no_bias_my_13m",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-03T16:45:37Z | ---
base_model: jnjj/model_no_bias_my_13m
tags:
- llama-cpp
- gguf-my-repo
---
# jnjj/model_no_bias_my_13m-Q3_K_L-GGUF
This model was converted to GGUF format from [`jnjj/model_no_bias_my_13m`](https://huggingface.co/jnjj/model_no_bias_my_13m) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/jnjj/model_no_bias_my_13m) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo jnjj/model_no_bias_my_13m-Q3_K_L-GGUF --hf-file model_no_bias_my_13m-q3_k_l.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo jnjj/model_no_bias_my_13m-Q3_K_L-GGUF --hf-file model_no_bias_my_13m-q3_k_l.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo jnjj/model_no_bias_my_13m-Q3_K_L-GGUF --hf-file model_no_bias_my_13m-q3_k_l.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo jnjj/model_no_bias_my_13m-Q3_K_L-GGUF --hf-file model_no_bias_my_13m-q3_k_l.gguf -c 2048
```
|
alpcaferoglu/Qwen2.5-Coder-3B-Instruct-bnb-4bit_bd_cs_t2s-t2sws_r64_a64_e2_bs2_gas4_sftreason | alpcaferoglu | 2025-05-03T16:45:21Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-01T11:35:22Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
apriasmoro/e83b3094-8648-4f61-a731-80f986134ecb | apriasmoro | 2025-05-03T16:42:42Z | 0 | 0 | peft | [
"peft",
"safetensors",
"phi3",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:microsoft/Phi-3-mini-4k-instruct",
"base_model:adapter:microsoft/Phi-3-mini-4k-instruct",
"license:mit",
"region:us"
] | null | 2025-05-03T16:37:14Z | ---
library_name: peft
license: mit
base_model: microsoft/Phi-3-mini-4k-instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: e83b3094-8648-4f61-a731-80f986134ecb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.5.2`
```yaml
adapter: lora
base_model: microsoft/Phi-3-mini-4k-instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- f8164dbb54597854_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/f8164dbb54597854_train_data.json
type:
field_input: description
field_instruction: article
field_output: reference
field_system: None
format: None
no_input_format: None
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: apriasmoro/e83b3094-8648-4f61-a731-80f986134ecb
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/f8164dbb54597854_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 40b4e886-e6cd-4d53-9dbf-7bfd3907faf7
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 40b4e886-e6cd-4d53-9dbf-7bfd3907faf7
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# e83b3094-8648-4f61-a731-80f986134ecb
This model is a fine-tuned version of [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1221
## Model description
More information needed
## Intended uses & limitations
More information needed
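Since the card itself provides no usage snippet, here is a minimal, hedged sketch of how a LoRA adapter trained with the config above is typically loaded on top of its Phi-3 base model (the prompt and generation settings are illustrative assumptions, not from the training run):

```python
# Minimal sketch (not from the original card): loading this LoRA adapter
# on top of microsoft/Phi-3-mini-4k-instruct with PEFT.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "microsoft/Phi-3-mini-4k-instruct",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,  # mirrors trust_remote_code: true in the axolotl config
)
model = PeftModel.from_pretrained(base, "apriasmoro/e83b3094-8648-4f61-a731-80f986134ecb")
tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-4k-instruct", trust_remote_code=True)

inputs = tokenizer("Summarize: LoRA adapters add small trainable matrices.", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```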
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.1906 | 0.0004 | 1 | 3.0166 |
| 3.1572 | 0.0012 | 3 | 2.9884 |
| 2.8477 | 0.0024 | 6 | 2.6561 |
| 2.3275 | 0.0036 | 9 | 2.1221 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.3
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3 |
mradermacher/MT1-Gen13-gemma-2-9B-i1-GGUF | mradermacher | 2025-05-03T16:42:19Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:zelk12/MT1-Gen13-gemma-2-9B",
"base_model:quantized:zelk12/MT1-Gen13-gemma-2-9B",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-05-03T10:32:34Z | ---
base_model: zelk12/MT1-Gen13-gemma-2-9B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/zelk12/MT1-Gen13-gemma-2-9B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/MT1-Gen13-gemma-2-9B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
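As a concrete illustration (the quant file name below is taken from the table that follows), one of these imatrix quants can be run directly with llama.cpp:

```bash
# Example invocation; requires a llama.cpp build with CURL support.
llama-cli --hf-repo mradermacher/MT1-Gen13-gemma-2-9B-i1-GGUF \
  --hf-file MT1-Gen13-gemma-2-9B.i1-Q4_K_M.gguf \
  -p "Write a haiku about model merging."
```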
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MT1-Gen13-gemma-2-9B-i1-GGUF/resolve/main/MT1-Gen13-gemma-2-9B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.5 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/MT1-Gen13-gemma-2-9B-i1-GGUF/resolve/main/MT1-Gen13-gemma-2-9B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.6 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/MT1-Gen13-gemma-2-9B-i1-GGUF/resolve/main/MT1-Gen13-gemma-2-9B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/MT1-Gen13-gemma-2-9B-i1-GGUF/resolve/main/MT1-Gen13-gemma-2-9B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/MT1-Gen13-gemma-2-9B-i1-GGUF/resolve/main/MT1-Gen13-gemma-2-9B.i1-IQ2_S.gguf) | i1-IQ2_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/MT1-Gen13-gemma-2-9B-i1-GGUF/resolve/main/MT1-Gen13-gemma-2-9B.i1-IQ2_M.gguf) | i1-IQ2_M | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/MT1-Gen13-gemma-2-9B-i1-GGUF/resolve/main/MT1-Gen13-gemma-2-9B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 3.7 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/MT1-Gen13-gemma-2-9B-i1-GGUF/resolve/main/MT1-Gen13-gemma-2-9B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MT1-Gen13-gemma-2-9B-i1-GGUF/resolve/main/MT1-Gen13-gemma-2-9B.i1-Q2_K.gguf) | i1-Q2_K | 3.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/MT1-Gen13-gemma-2-9B-i1-GGUF/resolve/main/MT1-Gen13-gemma-2-9B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/MT1-Gen13-gemma-2-9B-i1-GGUF/resolve/main/MT1-Gen13-gemma-2-9B.i1-IQ3_S.gguf) | i1-IQ3_S | 4.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/MT1-Gen13-gemma-2-9B-i1-GGUF/resolve/main/MT1-Gen13-gemma-2-9B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 4.4 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/MT1-Gen13-gemma-2-9B-i1-GGUF/resolve/main/MT1-Gen13-gemma-2-9B.i1-IQ3_M.gguf) | i1-IQ3_M | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/MT1-Gen13-gemma-2-9B-i1-GGUF/resolve/main/MT1-Gen13-gemma-2-9B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/MT1-Gen13-gemma-2-9B-i1-GGUF/resolve/main/MT1-Gen13-gemma-2-9B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 5.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/MT1-Gen13-gemma-2-9B-i1-GGUF/resolve/main/MT1-Gen13-gemma-2-9B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/MT1-Gen13-gemma-2-9B-i1-GGUF/resolve/main/MT1-Gen13-gemma-2-9B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 5.5 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/MT1-Gen13-gemma-2-9B-i1-GGUF/resolve/main/MT1-Gen13-gemma-2-9B.i1-Q4_0.gguf) | i1-Q4_0 | 5.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/MT1-Gen13-gemma-2-9B-i1-GGUF/resolve/main/MT1-Gen13-gemma-2-9B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 5.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/MT1-Gen13-gemma-2-9B-i1-GGUF/resolve/main/MT1-Gen13-gemma-2-9B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MT1-Gen13-gemma-2-9B-i1-GGUF/resolve/main/MT1-Gen13-gemma-2-9B.i1-Q4_1.gguf) | i1-Q4_1 | 6.1 | |
| [GGUF](https://huggingface.co/mradermacher/MT1-Gen13-gemma-2-9B-i1-GGUF/resolve/main/MT1-Gen13-gemma-2-9B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/MT1-Gen13-gemma-2-9B-i1-GGUF/resolve/main/MT1-Gen13-gemma-2-9B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/MT1-Gen13-gemma-2-9B-i1-GGUF/resolve/main/MT1-Gen13-gemma-2-9B.i1-Q6_K.gguf) | i1-Q6_K | 7.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
strangerzonehf/Flux-Midjourney-Studio-LoRA | strangerzonehf | 2025-05-03T16:42:00Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-03T10:41:07Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: 'mj x1, a close-up shot of a man with short platinum blonde hair and olive skin, dressed in a black crew neck t-shirt. He has a silver nose ring and a small teardrop tattoo under his left eye. The studio lighting casts a soft shadow under his chin, giving dimension to his jawline. The background is a gradient from pale blue to white.'
output:
url: images/1111.png
- text: 'mj x1, A close-up of a young man with dark brown eyes and wavy black hair. He is wearing a dark green trench coat with a high collar and a light brown scarf around his neck. The backdrop is a cloudy gray, adding an air of mystery to the scene.'
output:
url: images/2222.png
- text: 'mj x1, a close-up shot of a young mans face is adorned with a beige baseball cap adorned with red lettering. The mans eyes are a piercing blue, and he is wearing a pink t-shirt. His hair is dark brown, adding a touch of texture to his face. The backdrop is a vibrant shade of blue, creating a stark contrast to the mans head and the cap.'
output:
url: images/3333.png
- text: 'mj x1, cinematic shot, a medium-sized man stands in the foreground of the frame. He is dressed in a black jacket, black pants, and a black backpack. His hair is short and dark, and hes looking off to the side. The backdrop is a stark white wall, and the ceiling is adorned with metal pipes and beams. To the left of the man, a man with dark hair and glasses is looking off into the distance.'
output:
url: images/4444.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: mj x1
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---

# Flux-Midjourney-Studio-LoRA
<Gallery />
# Model description for Flux-Midjourney-Studio-LoRA
Image Processing Parameters
| Parameter | Value | Parameter | Value |
|---------------------------|--------|---------------------------|--------|
| LR Scheduler | constant | Noise Offset | 0.03 |
| Optimizer | AdamW | Multires Noise Discount | 0.1 |
| Network Dim | 64 | Multires Noise Iterations | 10 |
| Network Alpha | 32 | Repeat & Steps | 30 & 3900 |
| Epoch | 25 | Save Every N Epochs | 1 [90] |
Labeling: florence2-en (natural language & English)
Total Images Used for Training: 27 [Midjourney-generated synthetic images]
## Best Dimensions & Inference
| **Dimensions** | **Aspect Ratio** | **Recommendation** |
|-----------------|------------------|---------------------------|
| 1280 x 832 | 3:2 | Best |
| 1024 x 1024 | 1:1 | Default |
### Inference Range
- **Recommended Inference Steps:** 30-35
## Setting Up
```python
import torch
from diffusers import DiffusionPipeline
base_model = "black-forest-labs/FLUX.1-dev"
pipe = DiffusionPipeline.from_pretrained(base_model, torch_dtype=torch.bfloat16)
lora_repo = "strangerzonehf/Flux-Midjourney-Studio-LoRA"
trigger_word = "mj x1"
pipe.load_lora_weights(lora_repo)
device = torch.device("cuda")
pipe.to(device)
```
## Trigger words
You should use `mj x1` to trigger the image generation.
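Continuing the setup above, a hedged generation example (the prompt is illustrative; the step count and dimensions follow the tables in this card):

```python
# Assumes `pipe` from the "Setting Up" block above is already on the GPU.
image = pipe(
    "mj x1, a close-up studio portrait of a woman with silver hair, gradient blue backdrop",
    num_inference_steps=30,   # recommended range per this card: 30-35
    width=1280, height=832,   # "Best" dimensions per this card (3:2)
).images[0]
image.save("mj_x1_portrait.png")
```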
## Download model
Weights for this model are available in Safetensors format.
[Download](/strangerzonehf/Flux-Midjourney-Studio-LoRA/tree/main) them in the Files & versions tab. |
LarryAIDraw/Ishtar | LarryAIDraw | 2025-05-03T16:41:41Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-05-03T16:10:37Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/645130?modelVersionId=722582 |
LarryAIDraw/Ishtar_fire_emblem | LarryAIDraw | 2025-05-03T16:41:29Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-05-03T16:10:12Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/645631/ishtar-fire-emblem-genealogy-of-the-holy-war |
LarryAIDraw/ChamEyjafjallaPonyXL | LarryAIDraw | 2025-05-03T16:41:19Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-05-03T16:09:49Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/426460/eyjafjalla-3-outfits-or-arknights-or-pony-xl |
Triangle104/QWQ-32B-Dawnwhisper-Q4_K_S-GGUF | Triangle104 | 2025-05-03T16:41:12Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:DoppelReflEx/QWQ-32B-Dawnwhisper",
"base_model:quantized:DoppelReflEx/QWQ-32B-Dawnwhisper",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-03T16:39:44Z | ---
base_model: DoppelReflEx/QWQ-32B-Dawnwhisper
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# Triangle104/QWQ-32B-Dawnwhisper-Q4_K_S-GGUF
This model was converted to GGUF format from [`DoppelReflEx/QWQ-32B-Dawnwhisper`](https://huggingface.co/DoppelReflEx/QWQ-32B-Dawnwhisper) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/DoppelReflEx/QWQ-32B-Dawnwhisper) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/QWQ-32B-Dawnwhisper-Q4_K_S-GGUF --hf-file qwq-32b-dawnwhisper-q4_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/QWQ-32B-Dawnwhisper-Q4_K_S-GGUF --hf-file qwq-32b-dawnwhisper-q4_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/QWQ-32B-Dawnwhisper-Q4_K_S-GGUF --hf-file qwq-32b-dawnwhisper-q4_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/QWQ-32B-Dawnwhisper-Q4_K_S-GGUF --hf-file qwq-32b-dawnwhisper-q4_k_s.gguf -c 2048
```
|
pictgencustomer/coneyisland2-14_425 | pictgencustomer | 2025-05-03T16:38:24Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-03T16:38:21Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: coneyisland2-14_michaeluffer_6
---
# Coneyisland2 14_425
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `coneyisland2-14_michaeluffer_6` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('pictgencustomer/coneyisland2-14_425', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
Hamzah-Asadullah/Failed-FPFT-0.6B-GGUF | Hamzah-Asadullah | 2025-05-03T16:36:13Z | 0 | 1 | transformers | [
"transformers",
"gguf",
"conversational",
"text2text-generation",
"en",
"base_model:Hamzah-Asadullah/Failed-FPFT-0.6B",
"base_model:quantized:Hamzah-Asadullah/Failed-FPFT-0.6B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-05-03T16:26:56Z | ---
license: apache-2.0
language:
- en
base_model:
- Hamzah-Asadullah/Failed-FPFT-0.6B
pipeline_tag: text2text-generation
library_name: transformers
tags:
- conversational
---
> [!NOTE]
> Consider supporting me [here](https://ko-fi.com/hamzahasadullah) 🎉
> Try out my assistant for free [here](https://xetute.github.io/)
**Failed Full Parameter Fine Tune**
A fine-tuned version of the base model linked above. Its performance is better than the base model's, though not as high as I expected; hence this fine-tune is labeled a failure.
This model can be used for enhanced instruction following, STEM questions, story writing, roleplay, and so on. You can find the data used to train this model under `data.json`. For fine-tuning, the Qwen3 chat template was used.
**Happy chatting.**
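No launch command ships with this card; a hypothetical llama.cpp invocation might look like the following (the exact `.gguf` file name in this repo is an assumption):

```bash
# Hypothetical example -- the quant file name is assumed, not confirmed by the card.
llama-cli --hf-repo Hamzah-Asadullah/Failed-FPFT-0.6B-GGUF \
  --hf-file Failed-FPFT-0.6B.Q8_0.gguf \
  -p "Explain photosynthesis in two sentences."
```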
<div style="display: flex; flex-direction: column; justify-content: center; align-items: left; font-size: 1rem; padding: 20px;">
<div style="display: flex; flex-direction: row; align-items: center; margin: 10px; margin-left: 0; padding: 0;">
<img src="https://xetute.github.io/favicon.ico" style="margin: 0; border-radius: 50%; height: 2rem;"/>
<h2 style="margin: 0; margin-left: 10px;">XeTute Technologies</h2>
</div>
<div style="display: flex; flex-direction: row; gap: 5px; margin: 0; max-width: 500px;">
XeTute Technologies is an unofficial Pakistani organisation created by <a href="https://huggingface.co/Hamzah-Asadullah">Hamzah Asadullah.</a>
</div>
<h2 style="margin: 5px; margin-top: 20px; margin-left: 0;">Links</h2>
<div style="display: flex; flex-direction: row; word-break: none; gap: 5px;">
<a href="https://huggingface.co/XeTute">HuggingFace</a>
<a href="https://github.com/XeTute">GitHub</a>
</div>
<div style="display: flex; flex-direction: row; word-break: none; gap: 5px;">
<a href="https://ko-fi.com/hamzahasadullah">Buy me a Coffee</a>
<a href="https://xetute.github.io">Apex Webpage</a>
</div>
<h2 style="margin: 5px; margin-top: 20px; margin-left: 0;">Pakistan</h2>
Pakistan is a country in South Asia known for its rich culture despite British colonial rule, its stunning landscape, and PAF (Pakistan Armed Forces), its military. Long live the Islamic Republic of Pakistan.<br>
<img src="https://upload.wikimedia.org/wikipedia/commons/3/32/Flag_of_Pakistan.svg" style="width: 85%; max-width: 512px; border-radius: 25px;"/>
</div> |
Photchara/stock-sentiment-analysis_V1 | Photchara | 2025-05-03T16:31:04Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"base_model:ProsusAI/finbert",
"base_model:finetune:ProsusAI/finbert",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-05-03T16:03:33Z | ---
library_name: transformers
base_model:
- ProsusAI/finbert
pipeline_tag: text-classification
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
Fine-tuning configuration:
- MODEL_NAME: "ProsusAI/finbert"
- MAX_LENGTH: 512
- BATCH_SIZE: 16
- EPOCHS: 3
- LEARNING_RATE: 2e-5
Evaluation results:
- eval_loss: 0.3165287375450134
- eval_accuracy: 0.8761904761904762
- eval_f1: 0.8807109047522269
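A minimal sketch (not part of the original card) of running this classifier with the transformers pipeline API; the positive/negative/neutral label scheme is assumed from the FinBERT base model:

```python
from transformers import pipeline

clf = pipeline("text-classification", model="Photchara/stock-sentiment-analysis_V1")
print(clf("Shares rallied after the company beat earnings expectations."))
# e.g. [{'label': 'positive', 'score': ...}] under the assumed FinBERT label scheme
```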
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
vlassner01/t5_cnn_model_base_v2 | vlassner01 | 2025-05-03T16:27:36Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-05-03T16:25:41Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
Evaluation results:
- eval_loss: 0.6198398470878601
- eval_runtime: 48.3505
- eval_samples_per_second: 20.682
- eval_steps_per_second: 2.585
- epoch: 4.949333333333334
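Given the T5 architecture, a hedged inference sketch follows (the CNN/DailyMail-style summarization use is inferred from the model name, not stated in the card):

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="vlassner01/t5_cnn_model_base_v2")
article = "The city council approved a new transit plan on Tuesday ..."  # any news text
print(summarizer(article, max_length=60, min_length=20)[0]["summary_text"])
```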
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
TOMFORD79/Fly47 | TOMFORD79 | 2025-05-03T16:25:54Z | 0 | 0 | null | [
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | 2025-05-03T16:09:26Z | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
TOMFORD79/Fly46 | TOMFORD79 | 2025-05-03T16:25:43Z | 0 | 0 | null | [
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | 2025-05-03T16:09:22Z | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
TOMFORD79/Fly45 | TOMFORD79 | 2025-05-03T16:25:34Z | 0 | 0 | null | [
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | 2025-05-03T16:09:16Z | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
nawazadroit/final3801st | nawazadroit | 2025-05-03T16:22:33Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/Llama-3.2-1B-Instruct",
"base_model:finetune:unsloth/Llama-3.2-1B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-03T16:21:42Z | ---
base_model: unsloth/Llama-3.2-1B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** nawazadroit
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Llama-3.2-1B-Instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
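A hedged inference sketch (not from the original card) showing standard transformers usage for this Llama-3.2-1B-Instruct fine-tune:

```python
from transformers import pipeline

pipe = pipeline("text-generation", model="nawazadroit/final3801st", device_map="auto")
messages = [{"role": "user", "content": "Give me three tips for writing clear emails."}]
print(pipe(messages, max_new_tokens=96, return_full_text=False)[0]["generated_text"])
```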
|
Eehan/pythia-1b-deduped-tldr-gpm-2dim-temp-6025-beta-0.04 | Eehan | 2025-05-03T16:19:52Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-03T16:17:44Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ai-and-society/llama-3.1-8B-Instruct-SQINT8 | ai-and-society | 2025-05-03T16:19:19Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"compressed-tensors",
"region:us"
] | text-generation | 2025-05-03T16:16:24Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
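In the absence of an official snippet, here is a hedged sketch based only on the repo tags (llama, text-generation, 8-bit, compressed-tensors); it assumes the `compressed-tensors` package is installed so transformers can decompress the INT8 weights:

```python
from transformers import pipeline

# Assumption: this checkpoint loads like any transformers causal LM once
# compressed-tensors is available to handle the quantized weights.
pipe = pipeline(
    "text-generation",
    model="ai-and-society/llama-3.1-8B-Instruct-SQINT8",
    device_map="auto",
)
messages = [{"role": "user", "content": "Summarize INT8 weight quantization in one sentence."}]
print(pipe(messages, max_new_tokens=64, return_full_text=False)[0]["generated_text"])
```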
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mlfoundations-dev/no_pipeline_science_1k | mlfoundations-dev | 2025-05-03T16:18:58Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-03T15:42:13Z | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: no_pipeline_science_1k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# no_pipeline_science_1k
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/no_pipeline_science_1k dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 16
- gradient_accumulation_steps: 6
- total_train_batch_size: 96
- total_eval_batch_size: 128
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 7.0
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.5.1
- Datasets 3.1.0
- Tokenizers 0.20.3
|
mlfoundations-dev/no_pipeline_science_0.3k | mlfoundations-dev | 2025-05-03T16:18:41Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-03T15:42:15Z | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: no_pipeline_science_0.3k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# no_pipeline_science_0.3k
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/no_pipeline_science_0.3k dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 16
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- total_eval_batch_size: 128
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 13.0
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.5.1
- Datasets 3.1.0
- Tokenizers 0.20.3
|
BlcaCola/YI-AI-8B-Chinese-IT-V2-GGUF | BlcaCola | 2025-05-03T16:13:11Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"text-generation",
"zh",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-05-03T13:41:12Z | ---
tags:
- text-generation-inference
- transformers
- gguf
license: apache-2.0
language:
- zh
pipeline_tag: text-generation
---
# 模型信息/Model Information
<img src="YI.png" alt="YI-AI形象设定/Image setting of YI-AI" style="width: 400px; height: auto;">
YI-AI形象设定/Image setting of YI-AI
## YI-AI-Chinese-8B-V2 v.0.8.4版本正式发布公测!
## YI-AI-Chinese-8B-V2 v.0.8.4 is officially released for public beta!
### ZH
首先感谢大家对于YI-AI的反馈和支持!YI-AI在首发的一周内就已经突破了1K+的下载量!
怀着无比兴奋的心情,现在正式发布第二代“YI-AI-V2”!
本次发布的大模型为8.03B参数,Llama架构,使用约27W+优质中文语料训练集进行蒸馏,模型层数32个Transformer层,隐藏层数4096层,最大上下文长度8K的Token。
中文理解能力可以达到其他模型32B的能力(仅代表本人实验数据)。 相比于V1,V2版本在中文对话下拥有更强的遵循指令能力和中文创作能力。
本模型使用了基于人类反馈的强化学习(RLHF),使模型预测与人类偏好保持一致、机器反馈强化学习(RLMF),增强数学推理、强化学习执行反馈(RLEF),提高编码能力。
### EN
First of all, thank you for your feedback and support for YI-AI! YI-AI has already exceeded 1K+ downloads within a week of its debut!
With great excitement, we are now officially releasing the second generation of “YI-AI-V2”!
The model released this time has 8.03B parameters and a Llama architecture, distilled on a training set of roughly 270K+ high-quality Chinese samples,
with 32 Transformer layers, a hidden size of 4096, and a maximum context length of 8K tokens.
Its Chinese comprehension can match that of other 32B models (based only on my own experimental data).
Compared with V1, the V2 version is better at following instructions and at Chinese creative writing in Chinese dialogue.
This model uses Reinforcement Learning from Human Feedback (RLHF) to align model predictions with human preferences,
Reinforcement Learning from Machine Feedback (RLMF) to enhance mathematical reasoning, and Reinforcement Learning from Execution Feedback (RLEF) to improve coding capabilities.
# 数据预筛选/Data Preprocessing
### ZH
## 以下是应用于训练数据的主要数据清理和过滤方法:
- **儿童性虐待内容过滤:**在数据准备流程的多个阶段,我们都采用了严格的儿童性虐待内容 (CSAM) 过滤机制,以确保排除有害和非法内容。
- **敏感数据过滤:**为了确保模型安全可靠,我们从训练集中滤除特定的个人信息和其他敏感数据。
- **其他方法:**根据内容质量和安全性进行过滤。
### EN
## Here are the key data cleaning and filtering methods applied to the training data:
- **CSAM Filtering:** Rigorous CSAM (Child Sexual Abuse Material) filtering was applied at multiple stages in the data preparation process to ensure the exclusion of harmful and illegal content.
- **Sensitive Data Filtering:** As part of making models safe and reliable, automated techniques were used to filter out certain personal information and other sensitive data from training sets.
- **Additional methods:** Filtering based on content quality and safety.
# 推荐环境/Recommended environment:
### ZH
- Python 3.11
- 一张拥有 8GB 及以上显存的显卡(int4 量化最低要求为 4G 显存,CPU 推理需要 16G 及以上内存)
- 推荐使用Ollama。
- 在电脑端,使用 R7 7745HX 处理器、RAM 32G、RTX 4070移动版显卡(8G内存) ,在CUDA模式下性能可达到15-20个Token每秒。
- 移动端建议使用API调用的方式使用。
### EN
- Python 3.11
- A graphics card with 8GB or more video memory (int4 quantization requires at least 4G video memory, and CPU inference requires 16G or more memory)
- Ollama is recommended.
- On the computer side, using an R7 7745HX processor, 32G RAM, and an RTX 4070 mobile graphics card (8G memory), the performance in CUDA mode can reach 15-20 tokens per second.
- On mobile devices, using the model via API calls is recommended (see the example below).
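As an example of the API-call usage recommended above, the model pulled via the Ollama command in the next section can be queried through Ollama's standard REST endpoint (the model tag matches that command; the prompt is illustrative):

```bash
curl http://localhost:11434/api/generate -d '{
  "model": "hf.co/BlcaCola/YI-AI-8B-Chinese-IT-V2-GGUF:Q8_0",
  "prompt": "用两句话介绍一下你自己。"
}'
```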
## 通过Ollama部署/How to use from Ollama
```bash
ollama run hf.co/BlcaCola/YI-AI-8B-Chinese-IT-V2-GGUF:Q8_0
# 通过修改字段使用不同的量化模型,例如 “YI-AI-4B-Chinese-IT-V1-GGUF:Q8_0” 使用8-bit量化。
# 其他可选:"BlcaCola/YI-AI-8B-Chinese-IT-V2-GGUF:F16","BlcaCola/YI-AI-8B-Chinese-IT-V2-GGUF:Q6_K"
# Use different quantization models by modifying the field, for example, "YI-AI-8B-Chinese-IT-V2-GGUF:Q8_0" uses 8-bit quantization.
# Other options: "BlcaCola/YI-AI-8B-Chinese-IT-V2-GGUF:F16", "BlcaCola/YI-AI-8B-Chinese-IT-V2-GGUF:Q6_K".
``` |
mutaibamohsin845/cv_bot | mutaibamohsin845 | 2025-05-03T16:13:08Z | 0 | 0 | null | [
"base_model:deepseek-ai/DeepSeek-V3-0324",
"base_model:finetune:deepseek-ai/DeepSeek-V3-0324",
"license:mit",
"region:us"
] | null | 2025-05-03T16:11:55Z | ---
license: mit
base_model:
- deepseek-ai/DeepSeek-V3-0324
--- |
Miluzi/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-tame_twitchy_toad | Miluzi | 2025-05-03T16:11:59Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am tame twitchy toad",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-24T15:44:30Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-tame_twitchy_toad
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am tame twitchy toad
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-tame_twitchy_toad
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Miluzi/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-tame_twitchy_toad", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
hardik9719/videomae-base-finetuned-ucf-timesfomer-5-5-25-610videos | hardik9719 | 2025-05-03T16:04:39Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"timesformer",
"video-classification",
"generated_from_trainer",
"base_model:facebook/timesformer-base-finetuned-k400",
"base_model:finetune:facebook/timesformer-base-finetuned-k400",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | video-classification | 2025-05-03T05:26:46Z | ---
library_name: transformers
license: cc-by-nc-4.0
base_model: facebook/timesformer-base-finetuned-k400
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: videomae-base-finetuned-ucf-timesfomer-5-5-25-610videos
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-finetuned-ucf-timesfomer-5-5-25-610videos
This model is a fine-tuned version of [facebook/timesformer-base-finetuned-k400](https://huggingface.co/facebook/timesformer-base-finetuned-k400) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1723
- Accuracy: 0.5749
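The card ships no inference snippet; a hedged sketch for this TimeSformer checkpoint follows (the random frames stand in for a real 8-frame clip, and the processor is loaded from the base checkpoint in case one was not saved with this fine-tune):

```python
import numpy as np
import torch
from transformers import AutoImageProcessor, TimesformerForVideoClassification

processor = AutoImageProcessor.from_pretrained("facebook/timesformer-base-finetuned-k400")
model = TimesformerForVideoClassification.from_pretrained(
    "hardik9719/videomae-base-finetuned-ucf-timesfomer-5-5-25-610videos"
)

video = list(np.random.rand(8, 3, 224, 224))  # stand-in for 8 real RGB frames
inputs = processor(images=video, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```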
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 276
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 1.581 | 0.3370 | 93 | 1.6027 | 0.4022 |
| 1.4297 | 1.3370 | 186 | 1.2423 | 0.5869 |
| 0.8781 | 2.3261 | 276 | 1.0586 | 0.6129 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu118
- Datasets 3.3.2
- Tokenizers 0.21.1
|
luhaoran/Qwen2.5-7B-Stage2-040 | luhaoran | 2025-05-03T16:01:49Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"sft",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-03T14:54:34Z | ---
library_name: transformers
model_name: Qwen2.5-7B-Stage2-040
tags:
- generated_from_trainer
- open-r1
- trl
- sft
licence: license
---
# Model Card for Qwen2.5-7B-Stage2-040
This model is a fine-tuned version of an unspecified base model (the base-model field was not set when this card was generated).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="luhaoran/Qwen2.5-7B-Stage2-040", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/haoranlu0730-ustc/huggingface/runs/nybge3ld)
This model was trained with SFT.
### Framework versions
- TRL: 0.18.0.dev0
- Transformers: 4.52.0.dev0
- Pytorch: 2.6.0
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
ASethi04/meta-llama-Llama-3.1-8B-pubmedqa-third-lora-4-0.0001-same-prompt-template | ASethi04 | 2025-05-03T16:00:30Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:finetune:meta-llama/Llama-3.1-8B",
"endpoints_compatible",
"region:us"
] | null | 2025-05-03T14:19:39Z | ---
base_model: meta-llama/Llama-3.1-8B
library_name: transformers
model_name: meta-llama-Llama-3.1-8B-pubmedqa-third-lora-4-0.0001-same-prompt-template
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for meta-llama-Llama-3.1-8B-pubmedqa-third-lora-4-0.0001-same-prompt-template
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ASethi04/meta-llama-Llama-3.1-8B-pubmedqa-third-lora-4-0.0001-same-prompt-template", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/torchql-org/huggingface/runs/gp8nvulh)
This model was trained with SFT.
### Framework versions
- TRL: 0.16.1
- Transformers: 4.51.2
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Romain-XV/8f3edba3-d660-4f05-bc84-9befdb0b2deb | Romain-XV | 2025-05-03T15:59:04Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"mistral",
"text-generation",
"generated_from_trainer",
"axolotl",
"dpo",
"trl",
"unsloth",
"conversational",
"arxiv:2305.18290",
"base_model:unsloth/Phi-3-mini-4k-instruct",
"base_model:finetune:unsloth/Phi-3-mini-4k-instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-03T15:09:42Z | ---
base_model: unsloth/Phi-3-mini-4k-instruct
library_name: transformers
model_name: 8f3edba3-d660-4f05-bc84-9befdb0b2deb
tags:
- generated_from_trainer
- axolotl
- dpo
- trl
- unsloth
licence: license
---
# Model Card for 8f3edba3-d660-4f05-bc84-9befdb0b2deb
This model is a fine-tuned version of [unsloth/Phi-3-mini-4k-instruct](https://huggingface.co/unsloth/Phi-3-mini-4k-instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Romain-XV/8f3edba3-d660-4f05-bc84-9befdb0b2deb", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/romain_fnc-xventures/Gradients-On-Demand/runs/8bqxlpyz)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.12.0.dev0
- Transformers: 4.46.0
- Pytorch: 2.5.0+cu124
- Datasets: 3.0.1
- Tokenizers: 0.20.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
bihungba1101/json_segmenting_sft_warmup_qwen | bihungba1101 | 2025-05-03T15:57:22Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"base_model:unsloth/Qwen3-4B",
"base_model:finetune:unsloth/Qwen3-4B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-03T15:57:03Z | ---
base_model: unsloth/Qwen3-4B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** bihungba1101
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-4B
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
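A hedged quick-start, assuming the pushed weights load directly with 🤗 transformers (if only LoRA adapters were uploaded, attach them to the base model instead):
```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="bihungba1101/json_segmenting_sft_warmup_qwen",
    device="cuda",
)
print(generator("Segment the following JSON document:", max_new_tokens=64)[0]["generated_text"])
```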
|
BlackMoney/Football | BlackMoney | 2025-05-03T15:56:21Z | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | 2025-05-03T15:55:40Z | ---
license: mit
---
```python
import random

# Player Data (Lamine Yamal)
player = {
    "name": "Lamine Yamal",
    "age": 17,
    "goals": 0,
    "assists": 0,
    "ballon_dor_ranking": 1,
    "club_trophies": 0,
    "individual_awards": 0,
    "social_media_following": 40,  # in millions
    "contract_years_left": 5,  # Initial contract duration
    "national_team": True,  # Is he playing for the national team
    "team": "Barcelona",  # Initial team
    "is_injured": False,
    "performance_decrease_rate": 0.05,  # Performance declines by 5% per year after age 30
}

# Teams and Squad Data (Based on May 2025)
teams = {
    "Barcelona": {
        "squad": ["Lamine Yamal", "Pedri", "Robert Lewandowski", "Gavi", "Jules Kounde", "Ter Stegen"],
        "coach": "Julian Nagelsmann",
        "rivals": ["Real Madrid", "Atletico Madrid", "PSG"],
    },
    "Real Madrid": {
        "squad": ["Karim Benzema", "Vinicius Jr.", "Eduardo Camavinga", "Toni Kroos", "Thibaut Courtois", "Eder Militao"],
        "coach": "Carlo Ancelotti",
        "rivals": ["Barcelona", "Atletico Madrid", "Bayern Munich"],
    },
    "PSG": {
        "squad": ["Kylian Mbappé", "Lionel Messi", "Neymar", "Achraf Hakimi", "Gianluigi Donnarumma", "Marco Verratti"],
        "coach": "Christophe Galtier",
        "rivals": ["Marseille", "Bayern Munich", "Real Madrid"],
    },
    # Add more teams here as needed, with actual 2025 squads
}

# Function to simulate a season
def simulate_season(player, teams):
    # Player performance declines with age after 30
    if player["age"] > 30:
        performance_decline = 1 - player["performance_decrease_rate"]
    else:
        performance_decline = 1

    # Simulate player's season performance
    goals_scored = int(random.randint(20, 40) * performance_decline)  # Goals scored, scaled by age decline
    assists_made = int(random.randint(10, 20) * performance_decline)  # Assists
    player["goals"] += goals_scored
    player["assists"] += assists_made

    # Simulate trophies won (1 or 0)
    won_trophy = random.choice([1, 0])
    player["club_trophies"] += won_trophy

    individual_awards = random.randint(0, 3)
    player["individual_awards"] += individual_awards

    # Update Ballon d'Or ranking based on performance (20% chance to win)
    if random.random() < 0.2:
        player["ballon_dor_ranking"] = 1

    # Age progression and contract
    player["age"] += 1
    player["contract_years_left"] -= 1
    if player["contract_years_left"] <= 0:
        # Player might transfer or re-sign
        player["team"] = random.choice(list(teams.keys()))  # Random transfer
        player["contract_years_left"] = 5  # New contract for 5 years

    # Simulate injuries (5% chance per season)
    if random.random() < 0.05:
        player["is_injured"] = True
        print(f"{player['name']} is injured this season.")

    # Team rivalries: simulate tension when the player's team faces a rival
    for team_name, team_data in teams.items():
        if player["team"] in team_data["rivals"]:
            rivalry_game_result = random.choice(["Win", "Loss", "Draw"])
            print(f"Rivalry match against {team_name} ended in {rivalry_game_result}.")

    # Player social media influence grows with success
    player["social_media_following"] += random.randint(5, 10)

# Simulate several seasons
def career_simulation(player, seasons=10):
    for season in range(1, seasons + 1):
        print(f"Season {season}: {player['name']} at Age {player['age']} playing for {player['team']}")
        simulate_season(player, teams)
        print(f"Career totals so far: {player['goals']} goals, {player['assists']} assists")
        print(f"Trophies won: {player['club_trophies']}, Ballon d'Or Ranking: {player['ballon_dor_ranking']}")
        print(f"Social media following: {player['social_media_following']} million followers\n")
        if player["is_injured"]:
            print(f"{player['name']} is recovering from an injury.")
            player["is_injured"] = False

# Start career simulation
career_simulation(player)
```
|
Chriscapa/cap-lora | Chriscapa | 2025-05-03T15:53:47Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-03T15:38:45Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: Cap
---
# Cap Lora
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `Cap` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "Cap",
"lora_weights": "https://huggingface.co/Chriscapa/cap-lora/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Chriscapa/cap-lora', weight_name='lora.safetensors')
image = pipeline('Cap').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 1000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/Chriscapa/cap-lora/discussions) to add images that show off what you’ve made with this LoRA.
|
piotr-ai/polanka_4b_v0.1_qwen3_250502_gguf | piotr-ai | 2025-05-03T15:52:25Z | 0 | 0 | null | [
"gguf",
"language",
"text-generation",
"pl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-05-02T07:35:13Z | ---
license: apache-2.0
language:
- pl
- en
pipeline_tag: text-generation
tags:
- language
---
temp: 0.6
top-k: 40
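A hedged invocation sketch with these sampling settings (the GGUF filename is hypothetical; pick an actual file from this repo):
```bash
llama-cli -m polanka_4b_v0.1_qwen3_250502.Q4_K_M.gguf --temp 0.6 --top-k 40 -cnv
```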
**example 1:**
```
<|im_start|>system
Poniżej znajduje się dialog, w którym użytkownik rozmawia z asystentką o imieniu Polanka.
Polanka jest pomocna i uprzejma.<|im_end|>
> co to jest psychologia?
Psychologia to dziedzina nauki, która zajmuje się badaniem ludzkiego umysłu, zachowań, emocji, motywacji i procesów poznawczych. Psychologia bada, jak ludzie myślą, uczą się, doświadczają i reagują na swoje otoczenie. Jest to dyscyplina, która łączy w sobie elementy z różnych dziedzin, takich jak:
1. **Biologia**: Badanie biologicznych podstaw ludzkiego zachowania i poznawania.
2. **Neurobiologia**: Badanie struktury i funkcji mózgu oraz ich wpływu na zachowanie.
3. **Psychologia behawioralna**: Badanie zachowań poprzez obserwację i interwencję.
4. **Psychologia poznawcza**: Badanie procesów myślowych, takich jak pamięć, uczenie się, rozwiązywanie problemów.
5. **Psychologia społeczna**: Badanie wpływu społeczeństwa na zachowanie i postrzeganie siebie.
6. **Psychologia rozwoju**: Badanie rozwoju osobowości, poznawczego i emocjonalnego w czasie.
7. **Psychologia kliniczna**: Badanie i leczenie zaburzeń psychicznych oraz problemów emocjonalnych.
8. **Psychologia kulturowa**: Badanie wpływu kultury na ludzkie zachowanie i postrzeganie siebie.
9. **Psychologia etyczna**: Badanie etycznych aspektów psychologii i jej zastosowań.
Psychologia ma kluczowe znaczenie w rozumieniu ludzkich zachowań, motywacji i emocji, a także w rozwoju terapii i leczenia zaburzeń psychicznych.
```
**example 2:**
wyjaśnij krok po kroku ile to jest 12 / 3 + 2
```
Oczywiście, wyjaśnię ci to krok po kroku:
1. Rozpocznijmy od pierwszego działania: 12 / 3.
- 12 podzielone przez 3 to 4, ponieważ 3 razy 4 to 12.
2. Teraz dodajemy do wyniku 4 liczbę 2.
- 4 + 2 = 6.
Więc 12 / 3 + 2 = 6.
```
**example 3:**
napisz funkcję w pythonie, która mnoży liczbę razy 3
````
Oto funkcja w Pythonie, która mnoży liczbę przez 3:
```python
def mnozenie_liczby_3(liczba):
"""
Funkcja mnoży liczbę przez 3.
Parameters:
liczba (int): Liczba, którą chcemy pomnożyć.
Returns:
int: Wynik mnożenia liczby przez 3.
"""
wynik = liczba * 3
return wynik
# Przykłady użycia:
liczba = 5
wynik = mnozenie_liczby_3(liczba)
print(f"Wynik mnożenia 5 przez 3 to: {wynik}")
liczba = 0
wynik = mnozenie_liczby_3(liczba)
print(f"Wynik mnożenia 0 przez 3 to: {wynik}")
```
W tej funkcji `liczba` to parametr, który otrzymujemy podczas wywołania funkcji. Funkcja zwraca wynik mnożenia liczby przez 3. W przykładach użycia możemy przypisać do zmiennej liczbę, którą chcemy pomnożyć, a następnie wywołać funkcję z tą liczbą jako argumentem i wyświetlić wynik.
````
**example 4:**
przetłumacz na język angielski: "Liczba mórz na Oceanie Spokojnym w stosunku do jego powierzchni nie jest zbyt wielka, ponieważ większa część jego obszaru to po prostu ocean, bez miejsc, w których można wyznaczyć morza."
```
"Number of seas in the Pacific Ocean relative to its surface isn't too great because the majority of its area is just ocean, with no places where you can identify a sea."
``` |
DuongTrongChi/vinallama-2.7b-chat-sft-v1 | DuongTrongChi | 2025-05-03T15:47:34Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-03T15:44:49Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ping98k/qwen3-8b-recall-writer-1e | ping98k | 2025-05-03T15:47:27Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"base_model:unsloth/Qwen3-8B-unsloth-bnb-4bit",
"base_model:finetune:unsloth/Qwen3-8B-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-03T15:46:51Z | ---
base_model: unsloth/Qwen3-8B-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** ping98k
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-8B-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Triangle104/QWQ-32B-Dawnwhisper-Q3_K_S-GGUF | Triangle104 | 2025-05-03T15:43:11Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:DoppelReflEx/QWQ-32B-Dawnwhisper",
"base_model:quantized:DoppelReflEx/QWQ-32B-Dawnwhisper",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-03T15:42:08Z | ---
base_model: DoppelReflEx/QWQ-32B-Dawnwhisper
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# Triangle104/QWQ-32B-Dawnwhisper-Q3_K_S-GGUF
This model was converted to GGUF format from [`DoppelReflEx/QWQ-32B-Dawnwhisper`](https://huggingface.co/DoppelReflEx/QWQ-32B-Dawnwhisper) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/DoppelReflEx/QWQ-32B-Dawnwhisper) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/QWQ-32B-Dawnwhisper-Q3_K_S-GGUF --hf-file qwq-32b-dawnwhisper-q3_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/QWQ-32B-Dawnwhisper-Q3_K_S-GGUF --hf-file qwq-32b-dawnwhisper-q3_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/QWQ-32B-Dawnwhisper-Q3_K_S-GGUF --hf-file qwq-32b-dawnwhisper-q3_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/QWQ-32B-Dawnwhisper-Q3_K_S-GGUF --hf-file qwq-32b-dawnwhisper-q3_k_s.gguf -c 2048
```
|
franzexplorer77/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-alert_armored_trout | franzexplorer77 | 2025-05-03T15:43:02Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am alert armored trout",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-03T13:54:42Z | ---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-alert_armored_trout
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am alert armored trout
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-alert_armored_trout
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="franzexplorer77/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-alert_armored_trout", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
oferk/ppo-LunarLander-v2 | oferk | 2025-05-03T15:41:17Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2025-05-03T15:38:39Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 261.46 +/- 22.46
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside this repo is an assumption; check the repo's file list for the actual archive name):

```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Filename is an assumption -- adjust to the actual .zip stored in this repo.
checkpoint = load_from_hub(
    repo_id="oferk/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)
```
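To sanity-check the policy, here is a short evaluation rollout (the gymnasium env id and episode count are illustrative):

```python
import gymnasium as gym
from stable_baselines3.common.evaluation import evaluate_policy

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```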
|
stpete2/Qwen2.5-0.5B-gsm8k-sft | stpete2 | 2025-05-03T15:40:06Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"dataset:stpete2/openai-gsm8k-part",
"base_model:Qwen/Qwen2.5-0.5B",
"base_model:finetune:Qwen/Qwen2.5-0.5B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-03T15:39:52Z | ---
base_model: Qwen/Qwen2.5-0.5B
datasets: stpete2/openai-gsm8k-part
library_name: transformers
tags:
- generated_from_trainer
- open-r1
licence: license
---
# Model Card for stpete2/Qwen2.5-0.5B-gsm8k-sft
This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B](https://huggingface.co/Qwen/Qwen2.5-0.5B) on the [stpete2/openai-gsm8k-part](https://huggingface.co/datasets/stpete2/openai-gsm8k-part) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="stpete2/Qwen2.5-0.5B-gsm8k-sft", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/stpeteishii/huggingface/runs/yokzcfy9)
This model was trained with SFT.
### Framework versions
- TRL: 0.18.0.dev0
- Transformers: 4.52.0.dev0
- Pytorch: 2.6.0
- Datasets: 3.3.1
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
mradermacher/gemma-2-9B-it-blend-GGUF | mradermacher | 2025-05-03T15:34:26Z | 0 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:spacematt/gemma-2-9B-it-blend",
"base_model:quantized:spacematt/gemma-2-9B-it-blend",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-02T21:18:06Z | ---
base_model: spacematt/gemma-2-9B-it-blend
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/spacematt/gemma-2-9B-it-blend
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/gemma-2-9B-it-blend-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
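As a quick sketch, any single-file quant from the table below runs directly with llama.cpp (the Q4_K_M filename is taken from the table; the prompt is illustrative):

```bash
llama-cli --hf-repo mradermacher/gemma-2-9B-it-blend-GGUF \
  --hf-file gemma-2-9B-it-blend.Q4_K_M.gguf \
  -p "Summarize the idea of quantization in one sentence."
```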
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9B-it-blend-GGUF/resolve/main/gemma-2-9B-it-blend.Q2_K.gguf) | Q2_K | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9B-it-blend-GGUF/resolve/main/gemma-2-9B-it-blend.Q3_K_S.gguf) | Q3_K_S | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9B-it-blend-GGUF/resolve/main/gemma-2-9B-it-blend.Q3_K_M.gguf) | Q3_K_M | 4.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9B-it-blend-GGUF/resolve/main/gemma-2-9B-it-blend.Q3_K_L.gguf) | Q3_K_L | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9B-it-blend-GGUF/resolve/main/gemma-2-9B-it-blend.IQ4_XS.gguf) | IQ4_XS | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9B-it-blend-GGUF/resolve/main/gemma-2-9B-it-blend.Q4_K_S.gguf) | Q4_K_S | 5.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9B-it-blend-GGUF/resolve/main/gemma-2-9B-it-blend.Q4_K_M.gguf) | Q4_K_M | 5.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9B-it-blend-GGUF/resolve/main/gemma-2-9B-it-blend.Q5_K_S.gguf) | Q5_K_S | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9B-it-blend-GGUF/resolve/main/gemma-2-9B-it-blend.Q5_K_M.gguf) | Q5_K_M | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9B-it-blend-GGUF/resolve/main/gemma-2-9B-it-blend.Q6_K.gguf) | Q6_K | 7.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9B-it-blend-GGUF/resolve/main/gemma-2-9B-it-blend.Q8_0.gguf) | Q8_0 | 9.9 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9B-it-blend-GGUF/resolve/main/gemma-2-9B-it-blend.f16.gguf) | f16 | 18.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mafzaal/finetuned_arctic_ft | mafzaal | 2025-05-03T15:33:46Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:156",
"loss:MatryoshkaLoss",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:2205.13147",
"arxiv:1705.00652",
"base_model:Snowflake/snowflake-arctic-embed-l",
"base_model:finetune:Snowflake/snowflake-arctic-embed-l",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2025-05-03T15:33:03Z | ---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:156
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
base_model: Snowflake/snowflake-arctic-embed-l
widget:
- source_sentence: Which multi-modal models were released by significant vendors in
2024, and in which months did they appear?
sentences:
- 'An interesting point of comparison here could be the way railways rolled out
around the world in the 1800s. Constructing these required enormous investments
and had a massive environmental impact, and many of the lines that were built
turned out to be unnecessary—sometimes multiple lines from different companies
serving the exact same routes!
The resulting bubbles contributed to several financial crashes, see Wikipedia
for Panic of 1873, Panic of 1893, Panic of 1901 and the UK’s Railway Mania. They
left us with a lot of useful infrastructure and a great deal of bankruptcies and
environmental damage.
The year of slop'
- 'In 2024, almost every significant model vendor released multi-modal models. We
saw the Claude 3 series from Anthropic in March, Gemini 1.5 Pro in April (images,
audio and video), then September brought Qwen2-VL and Mistral’s Pixtral 12B and
Meta’s Llama 3.2 11B and 90B vision models. We got audio input and output from
OpenAI in October, then November saw SmolVLM from Hugging Face and December saw
image and video models from Amazon Nova.
In October I upgraded my LLM CLI tool to support multi-modal models via attachments.
It now has plugins for a whole collection of different vision models.'
- 'OpenAI made GPT-4o free for all users in May, and Claude 3.5 Sonnet was freely
available from its launch in June. This was a momentus change, because for the
previous year free users had mostly been restricted to GPT-3.5 level models, meaning
new users got a very inaccurate mental model of what a capable LLM could actually
do.
That era appears to have ended, likely permanently, with OpenAI’s launch of ChatGPT
Pro. This $200/month subscription service is the only way to access their most
capable model, o1 Pro.
Since the trick behind the o1 series (and the future models it will undoubtedly
inspire) is to expend more compute time to get better results, I don’t think those
days of free access to the best available models are likely to return.'
- source_sentence: How is a prompt without evals, models, and UX compared in the given
context?
sentences:
- 'The environmental impact got much, much worse
The much bigger problem here is the enormous competitive buildout of the infrastructure
that is imagined to be necessary for these models in the future.
Companies like Google, Meta, Microsoft and Amazon are all spending billions of
dollars rolling out new datacenters, with a very material impact on the electricity
grid and the environment. There’s even talk of spinning up new nuclear power stations,
but those can take decades.
Is this infrastructure necessary? DeepSeek v3’s $6m training cost and the continued
crash in LLM prices might hint that it’s not. But would you want to be the big
tech executive that argued NOT to build out this infrastructure only to be proven
wrong in a few years’ time?'
- 'When @v0 first came out we were paranoid about protecting the prompt with all
kinds of pre and post processing complexity.
We completely pivoted to let it rip. A prompt without the evals, models, and especially
UX is like getting a broken ASML machine without a manual'
- 'The boring yet crucial secret behind good system prompts is test-driven development.
You don’t write down a system prompt and find ways to test it. You write down
tests and find a system prompt that passes them.
It’s become abundantly clear over the course of 2024 that writing good automated
evals for LLM-powered systems is the skill that’s most needed to build useful
applications on top of these models. If you have a strong eval suite you can adopt
new models faster, iterate better and build more reliable and useful product features
than your competition.
Vercel’s Malte Ubl:'
- source_sentence: How did the construction of railways in the 1800s impact the environment?
sentences:
- 'DeepSeek v3 is a huge 685B parameter model—one of the largest openly licensed
models currently available, significantly bigger than the largest of Meta’s Llama
series, Llama 3.1 405B.
Benchmarks put it up there with Claude 3.5 Sonnet. Vibe benchmarks (aka the Chatbot
Arena) currently rank it 7th, just behind the Gemini 2.0 and OpenAI 4o/o1 models.
This is by far the highest ranking openly licensed model.
The really impressive thing about DeepSeek v3 is the training cost. The model
was trained on 2,788,000 H800 GPU hours at an estimated cost of $5,576,000. Llama
3.1 405B trained 30,840,000 GPU hours—11x that used by DeepSeek v3, for a model
that benchmarks slightly worse.'
- 'An interesting point of comparison here could be the way railways rolled out
around the world in the 1800s. Constructing these required enormous investments
and had a massive environmental impact, and many of the lines that were built
turned out to be unnecessary—sometimes multiple lines from different companies
serving the exact same routes!
The resulting bubbles contributed to several financial crashes, see Wikipedia
for Panic of 1873, Panic of 1893, Panic of 1901 and the UK’s Railway Mania. They
left us with a lot of useful infrastructure and a great deal of bankruptcies and
environmental damage.
The year of slop'
- 'So far, I think they’re a net positive. I’ve used them on a personal level to
improve my productivity (and entertain myself) in all sorts of different ways.
I think people who learn how to use them effectively can gain a significant boost
to their quality of life.
A lot of people are yet to be sold on their value! Some think their negatives
outweigh their positives, some think they are all hot air, and some even think
they represent an existential threat to humanity.
They’re actually quite easy to build
The most surprising thing we’ve learned about LLMs this year is that they’re actually
quite easy to build.'
- source_sentence: How many lines of Python code are generally needed to train a basic
version of a powerful system?
sentences:
- 'We already knew LLMs were spookily good at writing code. If you prompt them right,
it turns out they can build you a full interactive application using HTML, CSS
and JavaScript (and tools like React if you wire up some extra supporting build
mechanisms)—often in a single prompt.
Anthropic kicked this idea into high gear when they released Claude Artifacts,
a groundbreaking new feature that was initially slightly lost in the noise due
to being described half way through their announcement of the incredible Claude
3.5 Sonnet.
With Artifacts, Claude can write you an on-demand interactive application and
then let you use it directly inside the Claude interface.
Here’s my Extract URLs app, entirely generated by Claude:'
- 'I’m still trying to figure out the best patterns for doing this for my own work.
Everyone knows that evals are important, but there remains a lack of great guidance
for how to best implement them—I’m tracking this under my evals tag. My SVG pelican
riding a bicycle benchmark is a pale imitation of what a real eval suite should
look like.
Apple Intelligence is bad, Apple’s MLX library is excellent
As a Mac user I’ve been feeling a lot better about my choice of platform this
year.
Last year it felt like my lack of a Linux/Windows machine with an NVIDIA GPU
was a huge disadvantage in terms of trying out new models.'
- 'Intuitively, one would expect that systems this powerful would take millions
of lines of complex code. Instead, it turns out a few hundred lines of Python
is genuinely enough to train a basic version!
What matters most is the training data. You need a lot of data to make these
things work, and the quantity and quality of the training data appears to be the
most important factor in how good the resulting model is.
If you can gather the right data, and afford to pay for the GPUs to train it,
you can build an LLM.'
- source_sentence: According to the context, what is one of the best applications
of large language models (LLMs)?
sentences:
- 'A lot of people are excited about AI agents—an infuriatingly vague term that
seems to be converging on “AI systems that can go away and act on your behalf”.
We’ve been talking about them all year, but I’ve seen few if any examples of them
running in production, despite lots of exciting prototypes.
I think this is because of gullibility.
Can we solve this? Honestly, I’m beginning to suspect that you can’t fully solve
gullibility without achieving AGI. So it may be quite a while before those agent
dreams can really start to come true!
Code may be the best application
Over the course of the year, it’s become increasingly clear that writing code
is one of the things LLMs are most capable of.'
- 'Law is not ethics. Is it OK to train models on people’s content without their
permission, when those models will then be used in ways that compete with those
people?
As the quality of results produced by AI models has increased over the year, these
questions have become even more pressing.
The impact on human society in terms of these models is already huge, if difficult
to objectively measure.
People have certainly lost work to them—anecdotally, I’ve seen this for copywriters,
artists and translators.
There are a great deal of untold stories here. I’m hoping 2024 sees significant
amounts of dedicated journalism on this topic.
My blog in 2023
Here’s a tag cloud for content I posted to my blog in 2023 (generated using Django
SQL Dashboard):'
- 'The two main categories I see are people who think AI agents are obviously things
that go and act on your behalf—the travel agent model—and people who think in
terms of LLMs that have been given access to tools which they can run in a loop
as part of solving a problem. The term “autonomy” is often thrown into the mix
too, again without including a clear definition.
(I also collected 211 definitions on Twitter a few months ago—here they are in
Datasette Lite—and had gemini-exp-1206 attempt to summarize them.)
Whatever the term may mean, agents still have that feeling of perpetually “coming
soon”.'
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: SentenceTransformer based on Snowflake/snowflake-arctic-embed-l
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: Unknown
type: unknown
metrics:
- type: cosine_accuracy@1
value: 0.9166666666666666
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 1.0
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 1.0
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 1.0
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.9166666666666666
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.3333333333333333
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.20000000000000004
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.10000000000000002
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.9166666666666666
name: Cosine Recall@1
- type: cosine_recall@3
value: 1.0
name: Cosine Recall@3
- type: cosine_recall@5
value: 1.0
name: Cosine Recall@5
- type: cosine_recall@10
value: 1.0
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.9692441461309548
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.9583333333333334
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.9583333333333334
name: Cosine Map@100
---
# SentenceTransformer based on Snowflake/snowflake-arctic-embed-l
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Snowflake/snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [Snowflake/snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l) <!-- at revision d8fb21ca8d905d2832ee8b96c894d3298964346b -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("mafzaal/finetuned_arctic_ft")
# Run inference
sentences = [
'According to the context, what is one of the best applications of large language models (LLMs)?',
'A lot of people are excited about AI agents—an infuriatingly vague term that seems to be converging on “AI systems that can go away and act on your behalf”. We’ve been talking about them all year, but I’ve seen few if any examples of them running in production, despite lots of exciting prototypes.\nI think this is because of gullibility.\nCan we solve this? Honestly, I’m beginning to suspect that you can’t fully solve gullibility without achieving AGI. So it may be quite a while before those agent dreams can really start to come true!\nCode may be the best application\nOver the course of the year, it’s become increasingly clear that writing code is one of the things LLMs are most capable of.',
'The two main categories I see are people who think AI agents are obviously things that go and act on your behalf—the travel agent model—and people who think in terms of LLMs that have been given access to tools which they can run in a loop as part of solving a problem. The term “autonomy” is often thrown into the mix too, again without including a clear definition.\n(I also collected 211 definitions on Twitter a few months ago—here they are in Datasette Lite—and had gemini-exp-1206 attempt to summarize them.)\nWhatever the term may mean, agents still have that feeling of perpetually “coming soon”.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.9167 |
| cosine_accuracy@3 | 1.0 |
| cosine_accuracy@5 | 1.0 |
| cosine_accuracy@10 | 1.0 |
| cosine_precision@1 | 0.9167 |
| cosine_precision@3 | 0.3333 |
| cosine_precision@5 | 0.2 |
| cosine_precision@10 | 0.1 |
| cosine_recall@1 | 0.9167 |
| cosine_recall@3 | 1.0 |
| cosine_recall@5 | 1.0 |
| cosine_recall@10 | 1.0 |
| **cosine_ndcg@10** | **0.9692** |
| cosine_mrr@10 | 0.9583 |
| cosine_map@100 | 0.9583 |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 156 training samples
* Columns: <code>sentence_0</code> and <code>sentence_1</code>
* Approximate statistics based on the first 156 samples:
| | sentence_0 | sentence_1 |
|:--------|:-----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 12 tokens</li><li>mean: 21.18 tokens</li><li>max: 37 tokens</li></ul> | <ul><li>min: 43 tokens</li><li>mean: 135.14 tokens</li><li>max: 214 tokens</li></ul> |
* Samples:
| sentence_0 | sentence_1 |
|:-----------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>What significant development in Artificial Intelligence occurred in 2023 according to Simon Willison’s weblog?</code> | <code>Stuff we figured out about AI in 2023<br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br>Simon Willison’s Weblog<br>Subscribe<br><br><br><br><br><br><br>Stuff we figured out about AI in 2023<br>31st December 2023<br>2023 was the breakthrough year for Large Language Models (LLMs). I think it’s OK to call these AI—they’re the latest and (currently) most interesting development in the academic field of Artificial Intelligence that dates back to the 1950s.<br>Here’s my attempt to round up the highlights in one place!</code> |
| <code>How does Simon Willison describe the relationship between Large Language Models and the broader field of Artificial Intelligence?</code> | <code>Stuff we figured out about AI in 2023<br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br>Simon Willison’s Weblog<br>Subscribe<br><br><br><br><br><br><br>Stuff we figured out about AI in 2023<br>31st December 2023<br>2023 was the breakthrough year for Large Language Models (LLMs). I think it’s OK to call these AI—they’re the latest and (currently) most interesting development in the academic field of Artificial Intelligence that dates back to the 1950s.<br>Here’s my attempt to round up the highlights in one place!</code> |
| <code>What are some challenges mentioned in building large language models like GPT-4?</code> | <code>Large Language Models<br>They’re actually quite easy to build<br>You can run LLMs on your own devices<br>Hobbyists can build their own fine-tuned models<br>We don’t yet know how to build GPT-4<br>Vibes Based Development<br>LLMs are really smart, and also really, really dumb<br>Gullibility is the biggest unsolved problem<br>Code may be the best application<br>The ethics of this space remain diabolically complex<br>My blog in 2023</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 10
- `per_device_eval_batch_size`: 10
- `num_train_epochs`: 10
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 10
- `per_device_eval_batch_size`: 10
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 10
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `tp_size`: 0
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
</details>
### Training Logs
| Epoch | Step | cosine_ndcg@10 |
|:-----:|:----:|:--------------:|
| 1.0 | 16 | 0.9638 |
| 2.0 | 32 | 0.9539 |
| 3.0 | 48 | 0.9539 |
| 3.125 | 50 | 0.9539 |
| 4.0 | 64 | 0.9692 |
| 5.0 | 80 | 0.9692 |
| 6.0 | 96 | 0.9692 |
| 6.25 | 100 | 0.9539 |
| 7.0 | 112 | 0.9692 |
| 8.0 | 128 | 0.9692 |
| 9.0 | 144 | 0.9692 |
| 9.375 | 150 | 0.9692 |
| 10.0 | 160 | 0.9692 |
### Framework Versions
- Python: 3.11.12
- Sentence Transformers: 4.1.0
- Transformers: 4.51.3
- PyTorch: 2.6.0+cu124
- Accelerate: 1.6.0
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
cvoffer/991c9060-c99e-482b-a5a0-71bd43e3acfb | cvoffer | 2025-05-03T15:33:06Z | 0 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Mistral-Nemo-Base-2407",
"base_model:adapter:unsloth/Mistral-Nemo-Base-2407",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-03T14:49:43Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Mistral-Nemo-Base-2407
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 991c9060-c99e-482b-a5a0-71bd43e3acfb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: unsloth/Mistral-Nemo-Base-2407
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 7f381c5f243a3b63_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/7f381c5f243a3b63_train_data.json
type:
field_input: critic_prompt
field_instruction: prompt
field_output: init_response
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.55
group_by_length: false
hub_model_id: cvoffer/991c9060-c99e-482b-a5a0-71bd43e3acfb
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 1.0e-06
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 150
micro_batch_size: 10
mixed_precision: bf16
mlflow_experiment_name: /tmp/7f381c5f243a3b63_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 2048
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 1031246b-4c91-4dd7-b3f7-0b6440762bad
wandb_project: s56-28
wandb_run: your_name
wandb_runid: 1031246b-4c91-4dd7-b3f7-0b6440762bad
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 991c9060-c99e-482b-a5a0-71bd43e3acfb
This model is a fine-tuned version of [unsloth/Mistral-Nemo-Base-2407](https://huggingface.co/unsloth/Mistral-Nemo-Base-2407) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9944
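As the card omits a usage example, here is a minimal, assumed sketch for loading this LoRA adapter on top of its base model with PEFT (repo ids taken from this card):
```python
# Sketch only: attach the adapter from this repo to the base model it was trained on.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/Mistral-Nemo-Base-2407"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, "cvoffer/991c9060-c99e-482b-a5a0-71bd43e3acfb")
```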
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 150
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.0377 | 0.0360 | 150 | 0.9944 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Triangle104/Llama-3.1-Nemotron-Nano-8B-v1-Q5_K_S-GGUF | Triangle104 | 2025-05-03T15:32:24Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"nvidia",
"llama-3",
"pytorch",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:nvidia/Llama-3.1-Nemotron-Nano-8B-v1",
"base_model:quantized:nvidia/Llama-3.1-Nemotron-Nano-8B-v1",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-05-03T14:51:44Z | ---
base_model: nvidia/Llama-3.1-Nemotron-Nano-8B-v1
language:
- en
library_name: transformers
license: other
license_name: nvidia-open-model-license
license_link: https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/
pipeline_tag: text-generation
tags:
- nvidia
- llama-3
- pytorch
- llama-cpp
- gguf-my-repo
---
# Triangle104/Llama-3.1-Nemotron-Nano-8B-v1-Q5_K_S-GGUF
This model was converted to GGUF format from [`nvidia/Llama-3.1-Nemotron-Nano-8B-v1`](https://huggingface.co/nvidia/Llama-3.1-Nemotron-Nano-8B-v1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/nvidia/Llama-3.1-Nemotron-Nano-8B-v1) for more details on the model.
---
Llama-3.1-Nemotron-Nano-8B-v1 is a large language model (LLM) which is a derivative of Meta Llama-3.1-8B-Instruct (AKA the reference model). It is a reasoning model that is post-trained for reasoning, human chat preferences, and tasks such as RAG and tool calling.
Llama-3.1-Nemotron-Nano-8B-v1 offers a great tradeoff between model accuracy and efficiency. It is created from Llama 3.1 8B Instruct and offers improvements in model accuracy. The model fits on a single RTX GPU and can be used locally, and it supports a context length of 128K.
This model underwent a multi-phase post-training process to enhance both its reasoning and non-reasoning capabilities. This includes a supervised fine-tuning stage for Math, Code, Reasoning, and Tool Calling, as well as multiple reinforcement learning (RL) stages using REINFORCE (RLOO) and Online Reward-aware Preference Optimization (RPO) algorithms for both chat and instruction-following. The final model checkpoint is obtained after merging the final SFT and Online RPO checkpoints.
Improved using Qwen.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Llama-3.1-Nemotron-Nano-8B-v1-Q5_K_S-GGUF --hf-file llama-3.1-nemotron-nano-8b-v1-q5_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Llama-3.1-Nemotron-Nano-8B-v1-Q5_K_S-GGUF --hf-file llama-3.1-nemotron-nano-8b-v1-q5_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Llama-3.1-Nemotron-Nano-8B-v1-Q5_K_S-GGUF --hf-file llama-3.1-nemotron-nano-8b-v1-q5_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Llama-3.1-Nemotron-Nano-8B-v1-Q5_K_S-GGUF --hf-file llama-3.1-nemotron-nano-8b-v1-q5_k_s.gguf -c 2048
```
|
Full-Paro-Aarti-Viral-XX-Video/Viral.X.X.Video.paro.aarti.viral.vdeo.mms.link.twitter | Full-Paro-Aarti-Viral-XX-Video | 2025-05-03T15:32:04Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-03T15:31:50Z | <animated-image data-catalyst=""><a href="https://tinyurl.com/fn84hrnu?news-viral-video" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a> |
sweet-sour-pig/thro | sweet-sour-pig | 2025-05-03T15:30:32Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-classification",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.2-1B",
"base_model:finetune:meta-llama/Llama-3.2-1B",
"license:llama3.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-05-03T10:22:51Z | ---
library_name: transformers
license: llama3.2
base_model: meta-llama/Llama-3.2-1B
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- recall
model-index:
- name: thro
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# thro
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0000
- Accuracy: 1.0
- F1: 1.0
- Recall: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---:|:------:|
| 0.0 | 1.0 | 125 | 0.0000 | 1.0 | 1.0 | 1.0 |
| 0.0 | 2.0 | 250 | 0.0000 | 1.0 | 1.0 | 1.0 |
### Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 3.3.1
- Tokenizers 0.21.0
|
bodam/Llama-3.2-1B-ko_wiki-4bit-753 | bodam | 2025-05-03T15:27:55Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/Llama-3.2-1B-unsloth-bnb-4bit",
"base_model:finetune:unsloth/Llama-3.2-1B-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-03T15:22:56Z | ---
base_model: unsloth/Llama-3.2-1B-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** bodam
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Llama-3.2-1B-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Denx87/MOYS | Denx87 | 2025-05-03T15:22:00Z | 0 | 1 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] | text-to-image | 2025-05-03T15:19:35Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: photograph of beautiful indonesian women, wearing sexy green lingerie
output:
url: images/flux-lora_e000001.webp
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: moys
---
# MOYS
<Gallery />
## Model description

## Trigger words
You should use `moys` to trigger the image generation.
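A minimal usage sketch (assumed, not provided by the author) for applying this LoRA with diffusers; the prompt and step count are placeholders:
```python
# Sketch only: load the FLUX.1-dev base, attach this LoRA, and generate with the trigger word.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("Denx87/MOYS")
image = pipe(
    "moys, photograph of a woman in an elegant green dress",  # placeholder prompt
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("moys.png")
```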
## Download model
Weights for this model are available in Safetensors format.
[Download](/Denx87/MOYS/tree/main) them in the Files & versions tab.
|
mlx-community/Qwen3-8B-Base-bf16 | mlx-community | 2025-05-03T15:15:47Z | 0 | 0 | mlx | [
"mlx",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"base_model:Qwen/Qwen3-8B-Base",
"base_model:finetune:Qwen/Qwen3-8B-Base",
"license:apache-2.0",
"region:us"
] | text-generation | 2025-05-03T15:11:03Z | ---
license: apache-2.0
library_name: mlx
tags:
- mlx
pipeline_tag: text-generation
base_model: Qwen/Qwen3-8B-Base
---
# mlx-community/Qwen3-8B-Base-bf16
This model [mlx-community/Qwen3-8B-Base-bf16](https://huggingface.co/mlx-community/Qwen3-8B-Base-bf16) was
converted to MLX format from [Qwen/Qwen3-8B-Base](https://huggingface.co/Qwen/Qwen3-8B-Base)
using mlx-lm version **0.24.0**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/Qwen3-8B-Base-bf16")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
KriptoUzmani/Qwen2.5-32B-Instruct-bnb-4bit-Gensyn-Swarm-squeaky_trotting_komodo | KriptoUzmani | 2025-05-03T15:14:33Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am squeaky trotting komodo",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-32B-Instruct-bnb-4bit",
"base_model:finetune:Gensyn/Qwen2.5-32B-Instruct-bnb-4bit",
"endpoints_compatible",
"region:us"
] | null | 2025-05-01T05:26:54Z | ---
base_model: Gensyn/Qwen2.5-32B-Instruct-bnb-4bit
library_name: transformers
model_name: Qwen2.5-32B-Instruct-bnb-4bit-Gensyn-Swarm-squeaky_trotting_komodo
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am squeaky trotting komodo
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-32B-Instruct-bnb-4bit-Gensyn-Swarm-squeaky_trotting_komodo
This model is a fine-tuned version of [Gensyn/Qwen2.5-32B-Instruct-bnb-4bit](https://huggingface.co/Gensyn/Qwen2.5-32B-Instruct-bnb-4bit).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="KriptoUzmani/Qwen2.5-32B-Instruct-bnb-4bit-Gensyn-Swarm-squeaky_trotting_komodo", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
xingyu1996/chinese-poems-gpt2 | xingyu1996 | 2025-05-03T15:12:35Z | 0 | 0 | null | [
"safetensors",
"gpt2",
"chinese",
"poetry",
"zh",
"license:mit",
"region:us"
] | null | 2025-05-03T14:33:07Z | ---
language: zh
tags:
- gpt2
- chinese
- poetry
license: mit
---
# Chinese Poetry GPT-2 Model
This is a Chinese classical poetry generation model based on the GPT-2 architecture; after fine-tuning, it can generate classical Chinese poems.
## Usage
```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer
tokenizer = GPT2Tokenizer.from_pretrained("xingyu1996/chinese-poems-gpt2")
model = GPT2LMHeadModel.from_pretrained("xingyu1996/chinese-poems-gpt2")
# Generate a poem
input_text = "作者:李白\n标题:望月\n\n"
input_ids = tokenizer.encode(input_text, return_tensors='pt')
output = model.generate(
    input_ids,
    max_length=100,
    num_return_sequences=1,
    no_repeat_ngram_size=2,
    do_sample=True,  # sampling must be enabled for top_k/top_p/temperature to take effect
    top_k=50,
    top_p=0.95,
    temperature=0.7
)
generated_text = tokenizer.decode(output[0], skip_special_tokens=True)
print(generated_text)
```
## Training Data
The model was fine-tuned on a dataset of classical Chinese poetry, including classic works such as Tang-dynasty poems and Song-dynasty lyrics.
## Limitations
Poems generated by the model may not fully conform to the meter and formal conventions of classical Chinese poetry.
|
anilarslan/aa | anilarslan | 2025-05-03T15:11:44Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-03T14:49:46Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
joesharratt1229/external-llama-3b | joesharratt1229 | 2025-05-03T15:11:22Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-03T14:52:27Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
dzanbek/31f40443-6a03-4462-8652-7009c6c1403e | dzanbek | 2025-05-03T15:08:42Z | 0 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Mistral-Nemo-Base-2407",
"base_model:adapter:unsloth/Mistral-Nemo-Base-2407",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-03T14:44:35Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Mistral-Nemo-Base-2407
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 31f40443-6a03-4462-8652-7009c6c1403e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: unsloth/Mistral-Nemo-Base-2407
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 7f381c5f243a3b63_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/7f381c5f243a3b63_train_data.json
type:
field_input: critic_prompt
field_instruction: prompt
field_output: init_response
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: dzanbek/31f40443-6a03-4462-8652-7009c6c1403e
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/7f381c5f243a3b63_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 1031246b-4c91-4dd7-b3f7-0b6440762bad
wandb_project: s56-2
wandb_run: your_name
wandb_runid: 1031246b-4c91-4dd7-b3f7-0b6440762bad
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 31f40443-6a03-4462-8652-7009c6c1403e
This model is a fine-tuned version of [unsloth/Mistral-Nemo-Base-2407](https://huggingface.co/unsloth/Mistral-Nemo-Base-2407) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6457
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.7042 | 0.0384 | 200 | 0.6457 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
eRogueca/distilbert-rotten-tomatoes | eRogueca | 2025-05-03T15:07:29Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-05-03T15:05:57Z | ---
library_name: transformers
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: distilbert-rotten-tomatoes
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-rotten-tomatoes
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.51.3
- Pytorch 2.8.0.dev20250503+cu128
- Datasets 3.5.1
- Tokenizers 0.21.1
|
infogep/cf8ffb81-d148-43ec-8f11-c09431b5d71b | infogep | 2025-05-03T15:07:13Z | 0 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Mistral-Nemo-Base-2407",
"base_model:adapter:unsloth/Mistral-Nemo-Base-2407",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-03T14:44:17Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Mistral-Nemo-Base-2407
tags:
- axolotl
- generated_from_trainer
model-index:
- name: cf8ffb81-d148-43ec-8f11-c09431b5d71b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: unsloth/Mistral-Nemo-Base-2407
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 7f381c5f243a3b63_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/7f381c5f243a3b63_train_data.json
type:
field_input: critic_prompt
field_instruction: prompt
field_output: init_response
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: infogep/cf8ffb81-d148-43ec-8f11-c09431b5d71b
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/7f381c5f243a3b63_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 2048
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 1031246b-4c91-4dd7-b3f7-0b6440762bad
wandb_project: s56-30
wandb_run: your_name
wandb_runid: 1031246b-4c91-4dd7-b3f7-0b6440762bad
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# cf8ffb81-d148-43ec-8f11-c09431b5d71b
This model is a fine-tuned version of [unsloth/Mistral-Nemo-Base-2407](https://huggingface.co/unsloth/Mistral-Nemo-Base-2407) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6780
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.738 | 0.0384 | 200 | 0.6780 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
spacematt/gemma-3-12b-it-qat-q4_0-unquantized-Q5_K_M-GGUF | spacematt | 2025-05-03T14:48:43Z | 46 | 0 | transformers | [
"transformers",
"gguf",
"gemma3",
"gemma",
"google",
"llama-cpp",
"gguf-my-repo",
"image-text-to-text",
"base_model:google/gemma-3-12b-it-qat-q4_0-unquantized",
"base_model:quantized:google/gemma-3-12b-it-qat-q4_0-unquantized",
"license:gemma",
"endpoints_compatible",
"region:us",
"conversational"
] | image-text-to-text | 2025-04-21T15:22:43Z | ---
base_model: google/gemma-3-12b-it-qat-q4_0-unquantized
library_name: transformers
license: gemma
pipeline_tag: image-text-to-text
tags:
- gemma3
- gemma
- google
- llama-cpp
- gguf-my-repo
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: To access Gemma on Hugging Face, you’re required to review and
agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
---
# spacematt/gemma-3-12b-it-qat-q4_0-unquantized-Q5_K_M-GGUF
This model was converted to GGUF format from [`google/gemma-3-12b-it-qat-q4_0-unquantized`](https://huggingface.co/google/gemma-3-12b-it-qat-q4_0-unquantized) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/google/gemma-3-12b-it-qat-q4_0-unquantized) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo spacematt/gemma-3-12b-it-qat-q4_0-unquantized-Q5_K_M-GGUF --hf-file gemma-3-12b-it-qat-q4_0-unquantized-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo spacematt/gemma-3-12b-it-qat-q4_0-unquantized-Q5_K_M-GGUF --hf-file gemma-3-12b-it-qat-q4_0-unquantized-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo spacematt/gemma-3-12b-it-qat-q4_0-unquantized-Q5_K_M-GGUF --hf-file gemma-3-12b-it-qat-q4_0-unquantized-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo spacematt/gemma-3-12b-it-qat-q4_0-unquantized-Q5_K_M-GGUF --hf-file gemma-3-12b-it-qat-q4_0-unquantized-q5_k_m.gguf -c 2048
```
|
JulesGo/vit_focus_full | JulesGo | 2025-05-03T14:42:26Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"importance_model",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2025-05-02T09:43:24Z | ---
library_name: transformers
tags:
- generated_from_trainer
model-index:
- name: vit_focus_full
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit_focus_full
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0531
- Mse: 0.1291
- Mae: 0.3119
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mse | Mae |
|:-------------:|:-------:|:----:|:---------------:|:------:|:------:|
| 0.3146 | 0.9855 | 51 | 0.0595 | 0.1403 | 0.3265 |
| 0.2488 | 1.9855 | 102 | 0.0566 | 0.1395 | 0.3253 |
| 0.2278 | 2.9855 | 153 | 0.0611 | 0.1426 | 0.3288 |
| 0.206 | 3.9855 | 204 | 0.0536 | 0.1323 | 0.3180 |
| 0.1902 | 4.9855 | 255 | 0.0619 | 0.1411 | 0.3271 |
| 0.187 | 5.9855 | 306 | 0.0508 | 0.1320 | 0.3169 |
| 0.1757 | 6.9855 | 357 | 0.0537 | 0.1339 | 0.3183 |
| 0.1523 | 7.9855 | 408 | 0.0558 | 0.1330 | 0.3168 |
| 0.1528 | 8.9855 | 459 | 0.0591 | 0.1381 | 0.3225 |
| 0.1416 | 9.9855 | 510 | 0.0536 | 0.1353 | 0.3198 |
| 0.1298 | 10.9855 | 561 | 0.0530 | 0.1325 | 0.3164 |
| 0.1161 | 11.9855 | 612 | 0.0511 | 0.1315 | 0.3156 |
| 0.1085 | 12.9855 | 663 | 0.0531 | 0.1385 | 0.3243 |
| 0.1028 | 13.9855 | 714 | 0.0530 | 0.1316 | 0.3151 |
| 0.0891 | 14.9855 | 765 | 0.0540 | 0.1338 | 0.3178 |
| 0.0878 | 15.9855 | 816 | 0.0536 | 0.1335 | 0.3177 |
| 0.077 | 16.9855 | 867 | 0.0534 | 0.1299 | 0.3132 |
| 0.0769 | 17.9855 | 918 | 0.0549 | 0.1313 | 0.3149 |
| 0.0663 | 18.9855 | 969 | 0.0531 | 0.1291 | 0.3119 |
| 0.064 | 19.9855 | 1020 | 0.0540 | 0.1352 | 0.3197 |
| 0.0608 | 20.9855 | 1071 | 0.0535 | 0.1334 | 0.3179 |
| 0.0548 | 21.9855 | 1122 | 0.0529 | 0.1299 | 0.3134 |
| 0.0517 | 22.9855 | 1173 | 0.0534 | 0.1310 | 0.3152 |
| 0.0498 | 23.9855 | 1224 | 0.0544 | 0.1314 | 0.3151 |
| 0.047 | 24.9855 | 1275 | 0.0531 | 0.1309 | 0.3145 |
| 0.0443 | 25.9855 | 1326 | 0.0537 | 0.1325 | 0.3164 |
| 0.042 | 26.9855 | 1377 | 0.0533 | 0.1319 | 0.3156 |
| 0.0397 | 27.9855 | 1428 | 0.0530 | 0.1317 | 0.3155 |
| 0.0411 | 28.9855 | 1479 | 0.0542 | 0.1328 | 0.3167 |
| 0.0382 | 29.9855 | 1530 | 0.0533 | 0.1327 | 0.3166 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.7.0
- Datasets 3.5.1
- Tokenizers 0.21.1
|
dgambettaphd/M_llm2_gen6_WXS_doc1000_synt64_lr1e-04_acm_SYNLAST | dgambettaphd | 2025-05-03T14:41:32Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-03T14:41:20Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
prafulgulani/email_classification_bert | prafulgulani | 2025-05-03T14:41:31Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-03T13:56:35Z | ---
license: apache-2.0
---
|
ASethi04/meta-llama-Llama-3.1-8B-gsm8k-first-lora-4-0.0004 | ASethi04 | 2025-05-03T14:40:39Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:finetune:meta-llama/Llama-3.1-8B",
"endpoints_compatible",
"region:us"
] | null | 2025-05-03T13:46:48Z | ---
base_model: meta-llama/Llama-3.1-8B
library_name: transformers
model_name: meta-llama-Llama-3.1-8B-gsm8k-first-lora-4-0.0004
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for meta-llama-Llama-3.1-8B-gsm8k-first-lora-4-0.0004
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ASethi04/meta-llama-Llama-3.1-8B-gsm8k-first-lora-4-0.0004", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/torchql-org/huggingface/runs/24q7yukg)
This model was trained with SFT.
### Framework versions
- TRL: 0.16.1
- Transformers: 4.51.2
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
cgato/Nemo-12b-Humanize-KTO-Experimental-2 | cgato | 2025-05-03T14:39:28Z | 26 | 2 | null | [
"safetensors",
"mistral",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2025-03-24T01:58:26Z | ---
license: cc-by-nc-4.0
---
A public repo where I'll put my latest KTO results for the v0.2.4 version of my Humanize model. Working towards a final version, but I figure I'll throw this out there.
Similar to the first, the model is unusable without KTO, though it is slightly more coherent at base.
This model gives longer responses than the previous, and isn't great at basic chatting. I'll try to update it a bit less spastically but no promises.
## Basic Sampler Settings
- Temp: 0.8
- TopK: 80
- TopP: 0.9
- RepPen: 1.1
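For reference, a minimal sketch (assumed, not from the author) of applying these sampler settings with transformers:
```python
# Sketch only: the generation flags below mirror the sampler settings listed above.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cgato/Nemo-12b-Humanize-KTO-Experimental-2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Hello there!", return_tensors="pt").to(model.device)
out = model.generate(
    **inputs,
    do_sample=True,
    temperature=0.8,
    top_k=80,
    top_p=0.9,
    repetition_penalty=1.1,
    max_new_tokens=128,
)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```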
Formatting settings for SillyTavern: the template should end up as ChatML with username/character names.
 |
MinaMila/phi3_LoRa_ACSEmployment_2_cfda_ep4_22 | MinaMila | 2025-05-03T14:38:44Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-02T00:35:15Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mlfoundations-dev/d1_math_all_large | mlfoundations-dev | 2025-05-03T14:36:46Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-30T02:02:51Z | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: d1_math_all_large
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# d1_math_all_large
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/d1_math_all_large dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 64
- gradient_accumulation_steps: 8
- total_train_batch_size: 512
- total_eval_batch_size: 512
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.5.1
- Datasets 3.1.0
- Tokenizers 0.20.3
|
Thejina/Llama-3.1-8B-bnb-16bit-python-F16-GGUF | Thejina | 2025-05-03T14:36:07Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"sft",
"llama-cpp",
"gguf-my-lora",
"en",
"base_model:Thejina/Llama-3.1-8B-bnb-16bit-python",
"base_model:quantized:Thejina/Llama-3.1-8B-bnb-16bit-python",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-03T14:36:05Z | ---
base_model: Thejina/Llama-3.1-8B-bnb-16bit-python
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
- llama-cpp
- gguf-my-lora
license: apache-2.0
language:
- en
---
# Thejina/Llama-3.1-8B-bnb-16bit-python-F16-GGUF
This LoRA adapter was converted to GGUF format from [`Thejina/Llama-3.1-8B-bnb-16bit-python`](https://huggingface.co/Thejina/Llama-3.1-8B-bnb-16bit-python) via the ggml.ai's [GGUF-my-lora](https://huggingface.co/spaces/ggml-org/gguf-my-lora) space.
Refer to the [original adapter repository](https://huggingface.co/Thejina/Llama-3.1-8B-bnb-16bit-python) for more details.
## Use with llama.cpp
```bash
# with cli
llama-cli -m base_model.gguf --lora Llama-3.1-8B-bnb-16bit-python-f16.gguf (...other args)
# with server
llama-server -m base_model.gguf --lora Llama-3.1-8B-bnb-16bit-python-f16.gguf (...other args)
```
To know more about LoRA usage with llama.cpp server, refer to the [llama.cpp server documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/server/README.md).
|
ASethi04/meta-llama-Llama-3.1-8B-opc-sft-first-lora-4-0.0001 | ASethi04 | 2025-05-03T14:32:00Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:finetune:meta-llama/Llama-3.1-8B",
"endpoints_compatible",
"region:us"
] | null | 2025-05-03T13:23:26Z | ---
base_model: meta-llama/Llama-3.1-8B
library_name: transformers
model_name: meta-llama-Llama-3.1-8B-opc-sft-first-lora-4-0.0001
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for meta-llama-Llama-3.1-8B-opc-sft-first-lora-4-0.0001
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ASethi04/meta-llama-Llama-3.1-8B-opc-sft-first-lora-4-0.0001", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/torchql-org/huggingface/runs/0eo5c0tu)
This model was trained with SFT.
### Framework versions
- TRL: 0.16.1
- Transformers: 4.51.2
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
lokeshe09/Qwen_unsloth_finetune | lokeshe09 | 2025-05-03T14:31:49Z | 0 | 0 | transformers | [
"transformers",
"qwen2_5_vl",
"feature-extraction",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2025-05-03T14:26:18Z | ---
base_model: unsloth/qwen2.5-vl-7b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2_5_vl
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** lokeshe09
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-vl-7b-instruct-unsloth-bnb-4bit
This qwen2_5_vl model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
hZzy/mistral-7b-expo-7b-IPO-25-last-try-3 | hZzy | 2025-05-03T14:30:29Z | 0 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"alignment-handbook",
"ndcg",
"trl",
"expo",
"generated_from_trainer",
"dataset:hZzy/direction_right2",
"base_model:hZzy/mistral-7b-sft-25-1",
"base_model:adapter:hZzy/mistral-7b-sft-25-1",
"license:apache-2.0",
"region:us"
] | null | 2025-05-03T06:37:08Z | ---
base_model: hZzy/mistral-7b-sft-25-1
datasets:
- hZzy/direction_right2
library_name: peft
license: apache-2.0
tags:
- alignment-handbook
- ndcg
- trl
- expo
- generated_from_trainer
model-index:
- name: mistral-7b-expo-7b-IPO-25-last-try-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral-7b-expo-7b-IPO-25-last-try-3
This model is a fine-tuned version of [hZzy/mistral-7b-sft-25-1](https://huggingface.co/hZzy/mistral-7b-sft-25-1) on the hZzy/direction_right2 dataset.
It achieves the following results on the evaluation set:
- Loss: 4.2023
- Objective: 4.2488
- Logp Accuracy: 0.5302
- Log Diff Policy: 1.3478
- Chosen Logps: -91.7249
- Rejected Logps: -93.0727
- Logits: -2.1963
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 3
- eval_batch_size: 3
- seed: 42
- distributed_type: multi-GPU
- num_devices: 3
- gradient_accumulation_steps: 12
- total_train_batch_size: 108
- total_eval_batch_size: 9
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_ratio: 0.2
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Objective | Logp Accuracy | Log Diff Policy | Chosen Logps | Rejected Logps | Logits |
|:-------------:|:------:|:----:|:---------------:|:---------:|:-------------:|:---------------:|:------------:|:--------------:|:-------:|
| 4.9919 | 0.0758 | 50 | 4.9760 | 4.9709 | 0.5162 | 0.4112 | -94.0221 | -94.4334 | -2.1986 |
| 4.776 | 0.1517 | 100 | 4.8183 | 4.8138 | 0.5179 | 0.5890 | -92.3714 | -92.9604 | -2.1918 |
| 4.6242 | 0.2275 | 150 | 4.4994 | 4.5227 | 0.5215 | 0.9245 | -93.1799 | -94.1044 | -2.1806 |
| 4.1446 | 0.3033 | 200 | 4.3579 | 4.3849 | 0.5296 | 1.1231 | -91.9536 | -93.0767 | -2.2051 |
| 4.0524 | 0.3792 | 250 | 4.3097 | 4.3450 | 0.5305 | 1.2282 | -91.9540 | -93.1822 | -2.2202 |
| 4.0649 | 0.4550 | 300 | 4.2852 | 4.3276 | 0.5280 | 1.2893 | -91.4984 | -92.7877 | -2.2246 |
| 3.9029 | 0.5308 | 350 | 4.2732 | 4.3033 | 0.5296 | 1.2748 | -91.2104 | -92.4852 | -2.2165 |
| 3.7523 | 0.6067 | 400 | 4.2382 | 4.2802 | 0.5330 | 1.3602 | -89.9468 | -91.3069 | -2.1992 |
| 3.9188 | 0.6825 | 450 | 4.2282 | 4.2752 | 0.5271 | 1.2815 | -89.9520 | -91.2335 | -2.2066 |
| 3.924 | 0.7583 | 500 | 4.2284 | 4.2881 | 0.5282 | 1.3782 | -90.7221 | -92.1003 | -2.1915 |
| 3.8564 | 0.8342 | 550 | 4.2347 | 4.3006 | 0.5296 | 1.3164 | -90.8981 | -92.2144 | -2.1985 |
| 3.7736 | 0.9100 | 600 | 4.2372 | 4.3025 | 0.5291 | 1.3030 | -90.7523 | -92.0554 | -2.2105 |
| 3.9957 | 0.9858 | 650 | 4.2052 | 4.2543 | 0.5313 | 1.3570 | -91.9905 | -93.3475 | -2.2042 |
### Framework versions
- PEFT 0.11.1
- Transformers 4.45.2
- Pytorch 2.5.1+cu124
- Datasets 3.5.0
- Tokenizers 0.20.3 |
Yuuta208/Qwen2.5-7B-Instruct-Qwen2.5-Math-7B-Merged-della-27 | Yuuta208 | 2025-05-03T14:29:34Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2406.11617",
"base_model:Qwen/Qwen2.5-7B",
"base_model:merge:Qwen/Qwen2.5-7B",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:merge:Qwen/Qwen2.5-7B-Instruct",
"base_model:Qwen/Qwen2.5-Math-7B",
"base_model:merge:Qwen/Qwen2.5-Math-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-03T14:26:16Z | ---
base_model:
- Qwen/Qwen2.5-7B
- Qwen/Qwen2.5-7B-Instruct
- Qwen/Qwen2.5-Math-7B
library_name: transformers
tags:
- mergekit
- merge
---
# output_model_della
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DELLA](https://arxiv.org/abs/2406.11617) merge method using [Qwen/Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B) as a base.
### Models Merged
The following models were included in the merge:
* [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct)
* [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: Qwen/Qwen2.5-7B-Instruct
parameters:
weight: 0.5
- model: Qwen/Qwen2.5-Math-7B
parameters:
weight: 0.6
merge_method: della
base_model: Qwen/Qwen2.5-7B
parameters:
density: 0.8
normalize: true
int8_mask: true
dtype: float16
```
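To reproduce a merge like this locally, the YAML above can be fed to mergekit's documented Python entry point. The sketch below is hedged: it assumes mergekit's published API (`MergeConfiguration`, `run_merge`, `MergeOptions`) and that the config has been saved as `della.yml`; the output path is an assumption matching the card's model name.
```python
# Hypothetical sketch based on mergekit's documented Python API; the config
# file name (della.yml) and output path are assumptions, not from the card.
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("della.yml", "r", encoding="utf-8") as f:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    merge_config,
    "./output_model_della",           # matches the card's model name
    options=MergeOptions(cuda=True),  # drop cuda=True to merge on CPU
)
```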
|
wellepatts7t/sdfvdfv | wellepatts7t | 2025-05-03T14:28:32Z | 0 | 0 | null | [
"license:bsd-3-clause-clear",
"region:us"
] | null | 2025-05-03T14:28:32Z | ---
license: bsd-3-clause-clear
---
|
mlfoundations-dev/d1_math_longest_0.3k | mlfoundations-dev | 2025-05-03T14:26:48Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-03T12:50:13Z | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: d1_math_longest_0.3k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# d1_math_longest_0.3k
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/d1_math_longest_0.3k dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- total_eval_batch_size: 32
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 13.0
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.6.0+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
|
Triangle104/DareQwen-2.5-7B-Q8_0-GGUF | Triangle104 | 2025-05-03T14:26:47Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:Locutusque/DareQwen-2.5-7B",
"base_model:quantized:Locutusque/DareQwen-2.5-7B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-03T14:26:13Z | ---
base_model: Locutusque/DareQwen-2.5-7B
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# Triangle104/DareQwen-2.5-7B-Q8_0-GGUF
This model was converted to GGUF format from [`Locutusque/DareQwen-2.5-7B`](https://huggingface.co/Locutusque/DareQwen-2.5-7B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Locutusque/DareQwen-2.5-7B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/DareQwen-2.5-7B-Q8_0-GGUF --hf-file dareqwen-2.5-7b-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/DareQwen-2.5-7B-Q8_0-GGUF --hf-file dareqwen-2.5-7b-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/DareQwen-2.5-7B-Q8_0-GGUF --hf-file dareqwen-2.5-7b-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/DareQwen-2.5-7B-Q8_0-GGUF --hf-file dareqwen-2.5-7b-q8_0.gguf -c 2048
```
|
MrRobotoAI/A21 | MrRobotoAI | 2025-05-03T14:20:21Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2212.04089",
"base_model:MrRobotoAI/A2",
"base_model:merge:MrRobotoAI/A2",
"base_model:MrRobotoAI/A3",
"base_model:merge:MrRobotoAI/A3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-03T14:17:18Z | ---
base_model:
- MrRobotoAI/A2
- MrRobotoAI/A3
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Task Arithmetic](https://arxiv.org/abs/2212.04089) merge method using [MrRobotoAI/A2](https://huggingface.co/MrRobotoAI/A2) as a base.
### Models Merged
The following models were included in the merge:
* [MrRobotoAI/A3](https://huggingface.co/MrRobotoAI/A3)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: task_arithmetic
models:
- model: MrRobotoAI/A2
parameters:
weight:
- filter: v_proj
value: [0.8, 0.8, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.8, 0.8]
- filter: o_proj
value: [0.8, 0.8, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.8, 0.8]
- filter: up_proj
value: [0.8, 0.8, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.8, 0.8]
- filter: gate_proj
value: [0.8, 0.8, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.8, 0.8]
- filter: down_proj
value: [0.8, 0.8, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.8, 0.8]
- value: 1
- model: MrRobotoAI/A3
parameters:
weight:
- filter: v_proj
value: [0.2, 0.2, 0.5, 0.4, 0.3, 0.2, 0.3, 0.4, 0.5, 0.2, 0.2]
- filter: o_proj
value: [0.2, 0.2, 0.5, 0.4, 0.3, 0.2, 0.3, 0.4, 0.5, 0.2, 0.2]
- filter: up_proj
value: [0.2, 0.2, 0.5, 0.4, 0.3, 0.2, 0.3, 0.4, 0.5, 0.2, 0.2]
- filter: gate_proj
value: [0.2, 0.2, 0.5, 0.4, 0.3, 0.2, 0.3, 0.4, 0.5, 0.2, 0.2]
- filter: down_proj
value: [0.2, 0.2, 0.5, 0.4, 0.3, 0.2, 0.3, 0.4, 0.5, 0.2, 0.2]
- value: 0
base_model: MrRobotoAI/A2
dtype: bfloat16
```
|
ASethi04/meta-llama-Llama-3.1-8B-pubmedqa-second-lora-4-0.0001-same-prompt-template | ASethi04 | 2025-05-03T14:19:15Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:finetune:meta-llama/Llama-3.1-8B",
"endpoints_compatible",
"region:us"
] | null | 2025-05-03T12:37:13Z | ---
base_model: meta-llama/Llama-3.1-8B
library_name: transformers
model_name: meta-llama-Llama-3.1-8B-pubmedqa-second-lora-4-0.0001-same-prompt-template
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for meta-llama-Llama-3.1-8B-pubmedqa-second-lora-4-0.0001-same-prompt-template
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ASethi04/meta-llama-Llama-3.1-8B-pubmedqa-second-lora-4-0.0001-same-prompt-template", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/torchql-org/huggingface/runs/k6zr3x0o)
This model was trained with SFT.
### Framework versions
- TRL: 0.16.1
- Transformers: 4.51.2
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
MrRobotoAI/A20 | MrRobotoAI | 2025-05-03T14:17:13Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2212.04089",
"base_model:MrRobotoAI/A1",
"base_model:merge:MrRobotoAI/A1",
"base_model:MrRobotoAI/A2",
"base_model:merge:MrRobotoAI/A2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-03T14:14:16Z | ---
base_model:
- MrRobotoAI/A1
- MrRobotoAI/A2
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Task Arithmetic](https://arxiv.org/abs/2212.04089) merge method using [MrRobotoAI/A2](https://huggingface.co/MrRobotoAI/A2) as a base.
### Models Merged
The following models were included in the merge:
* [MrRobotoAI/A1](https://huggingface.co/MrRobotoAI/A1)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: task_arithmetic
models:
- model: MrRobotoAI/A2
parameters:
weight:
- filter: v_proj
value: [0.8, 0.8, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.8, 0.8]
- filter: o_proj
value: [0.8, 0.8, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.8, 0.8]
- filter: up_proj
value: [0.8, 0.8, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.8, 0.8]
- filter: gate_proj
value: [0.8, 0.8, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.8, 0.8]
- filter: down_proj
value: [0.8, 0.8, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.8, 0.8]
- value: 1
- model: MrRobotoAI/A1
parameters:
weight:
- filter: v_proj
value: [0.2, 0.2, 0.5, 0.4, 0.3, 0.2, 0.3, 0.4, 0.5, 0.2, 0.2]
- filter: o_proj
value: [0.2, 0.2, 0.5, 0.4, 0.3, 0.2, 0.3, 0.4, 0.5, 0.2, 0.2]
- filter: up_proj
value: [0.2, 0.2, 0.5, 0.4, 0.3, 0.2, 0.3, 0.4, 0.5, 0.2, 0.2]
- filter: gate_proj
value: [0.2, 0.2, 0.5, 0.4, 0.3, 0.2, 0.3, 0.4, 0.5, 0.2, 0.2]
- filter: down_proj
value: [0.2, 0.2, 0.5, 0.4, 0.3, 0.2, 0.3, 0.4, 0.5, 0.2, 0.2]
- value: 0
base_model: MrRobotoAI/A2
dtype: bfloat16
```
|
cnfusion/DeepSeek-Prover-V2-7B-mlx-fp16 | cnfusion | 2025-05-03T14:16:53Z | 0 | 0 | mlx | [
"mlx",
"safetensors",
"llama",
"base_model:deepseek-ai/DeepSeek-Prover-V2-7B",
"base_model:finetune:deepseek-ai/DeepSeek-Prover-V2-7B",
"region:us"
] | null | 2025-05-03T14:16:07Z | ---
base_model: deepseek-ai/DeepSeek-Prover-V2-7B
tags:
- mlx
---
# cnfusion/DeepSeek-Prover-V2-7B-mlx-fp16
The Model [cnfusion/DeepSeek-Prover-V2-7B-mlx-fp16](https://huggingface.co/cnfusion/DeepSeek-Prover-V2-7B-mlx-fp16) was converted to MLX format from [deepseek-ai/DeepSeek-Prover-V2-7B](https://huggingface.co/deepseek-ai/DeepSeek-Prover-V2-7B) using mlx-lm version **0.22.3**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("cnfusion/DeepSeek-Prover-V2-7B-mlx-fp16")
prompt="hello"
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
mradermacher/GRPO-VI-Qwen2-7B-RAG-i1-GGUF | mradermacher | 2025-05-03T14:13:02Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"retrieval-augmented-generation",
"text-generation-inference",
"vi",
"base_model:AITeamVN/GRPO-VI-Qwen2-7B-RAG",
"base_model:quantized:AITeamVN/GRPO-VI-Qwen2-7B-RAG",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-05-03T09:57:48Z | ---
base_model: AITeamVN/GRPO-VI-Qwen2-7B-RAG
language:
- vi
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- retrieval-augmented-generation
- text-generation-inference
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/AITeamVN/GRPO-VI-Qwen2-7B-RAG
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/GRPO-VI-Qwen2-7B-RAG-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
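As a concrete starting point, the snippet below is a hypothetical sketch (not part of the original card) that downloads one of the quants from the table in the next section with `huggingface_hub`; the filename is the i1-Q4_K_M entry listed there.
```python
# Hypothetical download sketch: fetch one GGUF quant from this repo, then
# point a llama.cpp binary at the returned local path.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/GRPO-VI-Qwen2-7B-RAG-i1-GGUF",
    filename="GRPO-VI-Qwen2-7B-RAG.i1-Q4_K_M.gguf",
)
print(path)  # e.g. pass to: llama-cli -m <path> -p "..."
```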
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/GRPO-VI-Qwen2-7B-RAG-i1-GGUF/resolve/main/GRPO-VI-Qwen2-7B-RAG.i1-IQ1_S.gguf) | i1-IQ1_S | 2.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/GRPO-VI-Qwen2-7B-RAG-i1-GGUF/resolve/main/GRPO-VI-Qwen2-7B-RAG.i1-IQ1_M.gguf) | i1-IQ1_M | 2.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/GRPO-VI-Qwen2-7B-RAG-i1-GGUF/resolve/main/GRPO-VI-Qwen2-7B-RAG.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/GRPO-VI-Qwen2-7B-RAG-i1-GGUF/resolve/main/GRPO-VI-Qwen2-7B-RAG.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/GRPO-VI-Qwen2-7B-RAG-i1-GGUF/resolve/main/GRPO-VI-Qwen2-7B-RAG.i1-IQ2_S.gguf) | i1-IQ2_S | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/GRPO-VI-Qwen2-7B-RAG-i1-GGUF/resolve/main/GRPO-VI-Qwen2-7B-RAG.i1-IQ2_M.gguf) | i1-IQ2_M | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/GRPO-VI-Qwen2-7B-RAG-i1-GGUF/resolve/main/GRPO-VI-Qwen2-7B-RAG.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.9 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/GRPO-VI-Qwen2-7B-RAG-i1-GGUF/resolve/main/GRPO-VI-Qwen2-7B-RAG.i1-Q2_K.gguf) | i1-Q2_K | 3.1 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/GRPO-VI-Qwen2-7B-RAG-i1-GGUF/resolve/main/GRPO-VI-Qwen2-7B-RAG.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/GRPO-VI-Qwen2-7B-RAG-i1-GGUF/resolve/main/GRPO-VI-Qwen2-7B-RAG.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/GRPO-VI-Qwen2-7B-RAG-i1-GGUF/resolve/main/GRPO-VI-Qwen2-7B-RAG.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/GRPO-VI-Qwen2-7B-RAG-i1-GGUF/resolve/main/GRPO-VI-Qwen2-7B-RAG.i1-IQ3_S.gguf) | i1-IQ3_S | 3.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/GRPO-VI-Qwen2-7B-RAG-i1-GGUF/resolve/main/GRPO-VI-Qwen2-7B-RAG.i1-IQ3_M.gguf) | i1-IQ3_M | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/GRPO-VI-Qwen2-7B-RAG-i1-GGUF/resolve/main/GRPO-VI-Qwen2-7B-RAG.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/GRPO-VI-Qwen2-7B-RAG-i1-GGUF/resolve/main/GRPO-VI-Qwen2-7B-RAG.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/GRPO-VI-Qwen2-7B-RAG-i1-GGUF/resolve/main/GRPO-VI-Qwen2-7B-RAG.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/GRPO-VI-Qwen2-7B-RAG-i1-GGUF/resolve/main/GRPO-VI-Qwen2-7B-RAG.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.5 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/GRPO-VI-Qwen2-7B-RAG-i1-GGUF/resolve/main/GRPO-VI-Qwen2-7B-RAG.i1-Q4_0.gguf) | i1-Q4_0 | 4.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/GRPO-VI-Qwen2-7B-RAG-i1-GGUF/resolve/main/GRPO-VI-Qwen2-7B-RAG.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/GRPO-VI-Qwen2-7B-RAG-i1-GGUF/resolve/main/GRPO-VI-Qwen2-7B-RAG.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/GRPO-VI-Qwen2-7B-RAG-i1-GGUF/resolve/main/GRPO-VI-Qwen2-7B-RAG.i1-Q4_1.gguf) | i1-Q4_1 | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/GRPO-VI-Qwen2-7B-RAG-i1-GGUF/resolve/main/GRPO-VI-Qwen2-7B-RAG.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/GRPO-VI-Qwen2-7B-RAG-i1-GGUF/resolve/main/GRPO-VI-Qwen2-7B-RAG.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/GRPO-VI-Qwen2-7B-RAG-i1-GGUF/resolve/main/GRPO-VI-Qwen2-7B-RAG.i1-Q6_K.gguf) | i1-Q6_K | 6.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/GRPO-VI-Qwen2-7B-RAG-GGUF | mradermacher | 2025-05-03T14:12:55Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"retrieval-augmented-generation",
"text-generation-inference",
"vi",
"base_model:AITeamVN/GRPO-VI-Qwen2-7B-RAG",
"base_model:quantized:AITeamVN/GRPO-VI-Qwen2-7B-RAG",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-02T17:54:13Z | ---
base_model: AITeamVN/GRPO-VI-Qwen2-7B-RAG
language:
- vi
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- retrieval-augmented-generation
- text-generation-inference
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/AITeamVN/GRPO-VI-Qwen2-7B-RAG
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/GRPO-VI-Qwen2-7B-RAG-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/GRPO-VI-Qwen2-7B-RAG-GGUF/resolve/main/GRPO-VI-Qwen2-7B-RAG.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/GRPO-VI-Qwen2-7B-RAG-GGUF/resolve/main/GRPO-VI-Qwen2-7B-RAG.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/GRPO-VI-Qwen2-7B-RAG-GGUF/resolve/main/GRPO-VI-Qwen2-7B-RAG.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/GRPO-VI-Qwen2-7B-RAG-GGUF/resolve/main/GRPO-VI-Qwen2-7B-RAG.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/GRPO-VI-Qwen2-7B-RAG-GGUF/resolve/main/GRPO-VI-Qwen2-7B-RAG.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/GRPO-VI-Qwen2-7B-RAG-GGUF/resolve/main/GRPO-VI-Qwen2-7B-RAG.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/GRPO-VI-Qwen2-7B-RAG-GGUF/resolve/main/GRPO-VI-Qwen2-7B-RAG.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/GRPO-VI-Qwen2-7B-RAG-GGUF/resolve/main/GRPO-VI-Qwen2-7B-RAG.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/GRPO-VI-Qwen2-7B-RAG-GGUF/resolve/main/GRPO-VI-Qwen2-7B-RAG.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/GRPO-VI-Qwen2-7B-RAG-GGUF/resolve/main/GRPO-VI-Qwen2-7B-RAG.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/GRPO-VI-Qwen2-7B-RAG-GGUF/resolve/main/GRPO-VI-Qwen2-7B-RAG.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/GRPO-VI-Qwen2-7B-RAG-GGUF/resolve/main/GRPO-VI-Qwen2-7B-RAG.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mlfoundations-dev/d1_math_gpt_0.3k | mlfoundations-dev | 2025-05-03T14:12:48Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-03T12:41:26Z | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: d1_math_gpt_0.3k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# d1_math_gpt_0.3k
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/d1_math_gpt_0.3k dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- total_eval_batch_size: 32
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 13.0
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.6.0+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
|
jahyungu/Llama-3.1-8B-Instruct_MetaMathQA-40K_cluster9 | jahyungu | 2025-05-03T14:11:25Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"license:llama3.1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-03T09:45:45Z | ---
library_name: transformers
license: llama3.1
base_model: meta-llama/Llama-3.1-8B-Instruct
tags:
- generated_from_trainer
model-index:
- name: Llama-3.1-8B-Instruct_MetaMathQA-40K_cluster9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-3.1-8B-Instruct_MetaMathQA-40K_cluster9
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.50.0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.0
|
MrRobotoAI/A18 | MrRobotoAI | 2025-05-03T14:11:11Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2212.04089",
"base_model:MrRobotoAI/153",
"base_model:merge:MrRobotoAI/153",
"base_model:MrRobotoAI/A2",
"base_model:merge:MrRobotoAI/A2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-03T14:08:10Z | ---
base_model:
- MrRobotoAI/A2
- MrRobotoAI/153
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Task Arithmetic](https://arxiv.org/abs/2212.04089) merge method using [MrRobotoAI/A2](https://huggingface.co/MrRobotoAI/A2) as a base.
### Models Merged
The following models were included in the merge:
* [MrRobotoAI/153](https://huggingface.co/MrRobotoAI/153)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: task_arithmetic
models:
- model: MrRobotoAI/A2
parameters:
weight:
- filter: v_proj
value: [0.8, 0.8, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.8, 0.8]
- filter: o_proj
value: [0.8, 0.8, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.8, 0.8]
- filter: up_proj
value: [0.8, 0.8, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.8, 0.8]
- filter: gate_proj
value: [0.8, 0.8, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.8, 0.8]
- filter: down_proj
value: [0.8, 0.8, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.8, 0.8]
- value: 1
- model: MrRobotoAI/153
parameters:
weight:
- filter: v_proj
value: [0.2, 0.2, 0.5, 0.4, 0.3, 0.2, 0.3, 0.4, 0.5, 0.2, 0.2]
- filter: o_proj
value: [0.2, 0.2, 0.5, 0.4, 0.3, 0.2, 0.3, 0.4, 0.5, 0.2, 0.2]
- filter: up_proj
value: [0.2, 0.2, 0.5, 0.4, 0.3, 0.2, 0.3, 0.4, 0.5, 0.2, 0.2]
- filter: gate_proj
value: [0.2, 0.2, 0.5, 0.4, 0.3, 0.2, 0.3, 0.4, 0.5, 0.2, 0.2]
- filter: down_proj
value: [0.2, 0.2, 0.5, 0.4, 0.3, 0.2, 0.3, 0.4, 0.5, 0.2, 0.2]
- value: 0
base_model: MrRobotoAI/A2
dtype: bfloat16
```
|
Triangle104/DareQwen-2.5-7B-Q6_K-GGUF | Triangle104 | 2025-05-03T14:10:57Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:Locutusque/DareQwen-2.5-7B",
"base_model:quantized:Locutusque/DareQwen-2.5-7B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-03T14:10:29Z | ---
base_model: Locutusque/DareQwen-2.5-7B
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# Triangle104/DareQwen-2.5-7B-Q6_K-GGUF
This model was converted to GGUF format from [`Locutusque/DareQwen-2.5-7B`](https://huggingface.co/Locutusque/DareQwen-2.5-7B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Locutusque/DareQwen-2.5-7B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/DareQwen-2.5-7B-Q6_K-GGUF --hf-file dareqwen-2.5-7b-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/DareQwen-2.5-7B-Q6_K-GGUF --hf-file dareqwen-2.5-7b-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/DareQwen-2.5-7B-Q6_K-GGUF --hf-file dareqwen-2.5-7b-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/DareQwen-2.5-7B-Q6_K-GGUF --hf-file dareqwen-2.5-7b-q6_k.gguf -c 2048
```
|
Subsets and Splits