modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-06-27 06:27:46) | downloads (int64, 0–223M) | likes (int64, 0–11.7k) | library_name (string, 499 classes) | tags (sequence, 1–4.05k items) | pipeline_tag (string, 54 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-06-27 06:26:25) | card (string, 11–1.01M chars)
---|---|---|---|---|---|---|---|---|---|
Kitti6913/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-beaked_clawed_mammoth | Kitti6913 | 2025-05-02T20:17:01Z | 14 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am beaked clawed mammoth",
"unsloth",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-03T21:49:47Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-beaked_clawed_mammoth
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am beaked clawed mammoth
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-beaked_clawed_mammoth
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Kitti6913/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-beaked_clawed_mammoth", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
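At its core, GRPO samples a group of completions per prompt and scores each one relative to its group, which removes the need for a separate value model. Below is a minimal sketch of that group-relative advantage normalization, as described in the paper — an illustration only, not TRL's exact implementation:
```python
import torch

def grpo_advantages(rewards: torch.Tensor, eps: float = 1e-4) -> torch.Tensor:
    """Group-relative advantages for rewards of shape (num_prompts, group_size)."""
    mean = rewards.mean(dim=-1, keepdim=True)
    std = rewards.std(dim=-1, keepdim=True)
    # Each completion is scored against its own group's statistics.
    return (rewards - mean) / (std + eps)

# One prompt, four sampled completions with scalar rewards.
print(grpo_advantages(torch.tensor([[1.0, 0.0, 0.5, 0.5]])))
```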
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
AlSamCur123/Llama-3.2-1B-Instruct | AlSamCur123 | 2025-05-02T20:08:24Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:AlSamCur123/Llama-3.2-1B-InstructContinuedFine",
"base_model:quantized:AlSamCur123/Llama-3.2-1B-InstructContinuedFine",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-02T20:06:36Z | ---
base_model: AlSamCur123/Llama-3.2-1B-InstructContinuedFine
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** AlSamCur123
- **License:** apache-2.0
- **Finetuned from model:** AlSamCur123/Llama-3.2-1B-InstructContinuedFine
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
AndreasLH/Weak-Cube-R-CNN | AndreasLH | 2025-05-02T20:04:17Z | 0 | 0 | null | [
"arxiv:2504.13297",
"license:mit",
"region:us"
] | null | 2024-12-17T09:30:57Z | ---
license: mit
---
# Weakly supervised 3D Object Detection
Modelcard for the model implemented in the paper https://arxiv.org/abs/2504.13297
The full description of the model is at [Github](https://github.com/AndreasLH/Weak-Cube-R-CNN)
|
fedovtt/2d32fc24-12c5-4219-bc47-094559c3731e | fedovtt | 2025-05-02T20:03:58Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:defog/sqlcoder-7b-2",
"base_model:adapter:defog/sqlcoder-7b-2",
"license:cc-by-sa-4.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-02T19:08:58Z | ---
library_name: peft
license: cc-by-sa-4.0
base_model: defog/sqlcoder-7b-2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 2d32fc24-12c5-4219-bc47-094559c3731e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: defog/sqlcoder-7b-2
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 7b62275ec2b93102_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/7b62275ec2b93102_train_data.json
type:
field_input: user_prompt
field_instruction: system_prompt
field_output: prompt
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.55
group_by_length: false
hub_model_id: fedovtt/2d32fc24-12c5-4219-bc47-094559c3731e
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 3.0e-06
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 150
micro_batch_size: 10
mixed_precision: bf16
mlflow_experiment_name: /tmp/7b62275ec2b93102_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 2048
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 0fe4074c-9202-49b2-b5a2-5429bcecfdf5
wandb_project: s56-28
wandb_run: your_name
wandb_runid: 0fe4074c-9202-49b2-b5a2-5429bcecfdf5
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 2d32fc24-12c5-4219-bc47-094559c3731e
This model is a fine-tuned version of [defog/sqlcoder-7b-2](https://huggingface.co/defog/sqlcoder-7b-2) on the dataset described in the Axolotl config above.
It achieves the following results on the evaluation set:
- Loss: 0.0610
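A minimal sketch of loading this LoRA adapter on top of its base model with PEFT (the prompt and generation settings below are assumptions, not from this card):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach the adapter weights from this repo.
base = AutoModelForCausalLM.from_pretrained("defog/sqlcoder-7b-2", device_map="auto")
model = PeftModel.from_pretrained(base, "fedovtt/2d32fc24-12c5-4219-bc47-094559c3731e")
tokenizer = AutoTokenizer.from_pretrained("defog/sqlcoder-7b-2")

prompt = "-- Write a SQL query returning the ten most recent orders\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```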
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-06
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 150
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0631 | 0.0089 | 150 | 0.0610 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
gradientrouting-spar/qwen_ft_doutcome_all_seed1_30Apr_gradclipping_epoch20_checkpoint | gradientrouting-spar | 2025-05-02T20:03:36Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-02T20:02:39Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
prithivMLmods/Qwen2-VL-OCR-2B-Instruct | prithivMLmods | 2025-05-02T20:00:54Z | 51,884 | 65 | transformers | [
"transformers",
"safetensors",
"qwen2_vl",
"image-text-to-text",
"Math",
"OCR",
"Latex",
"VLM",
"Plain_Text",
"KIE",
"Equations",
"VQA",
"conversational",
"en",
"dataset:unsloth/LaTeX_OCR",
"dataset:linxy/LaTeX_OCR",
"base_model:Qwen/Qwen2-VL-2B-Instruct",
"base_model:finetune:Qwen/Qwen2-VL-2B-Instruct",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2024-12-19T01:57:34Z | ---
license: apache-2.0
datasets:
- unsloth/LaTeX_OCR
- linxy/LaTeX_OCR
language:
- en
base_model:
- Qwen/Qwen2-VL-2B-Instruct
pipeline_tag: image-text-to-text
library_name: transformers
tags:
- Math
- OCR
- Latex
- VLM
- Plain_Text
- KIE
- Equations
- VQA
---
# **Qwen2-VL-OCR-2B-Instruct [ VL / OCR ]**

> The **Qwen2-VL-OCR-2B-Instruct** model is a fine-tuned version of **Qwen/Qwen2-VL-2B-Instruct**, tailored for tasks that involve **Optical Character Recognition (OCR)**, **image-to-text conversion**, and **math problem solving with LaTeX formatting**. This model integrates a conversational approach with visual and textual understanding to handle multi-modal tasks effectively.
[](https://huggingface.co/prithivMLmods/Qwen2-VL-OCR-2B-Instruct/blob/main/Demo/ocrtest_qwen.ipynb)
#### Key Enhancements:
* **SoTA understanding of images of various resolution & ratio**: Qwen2-VL achieves state-of-the-art performance on visual understanding benchmarks, including MathVista, DocVQA, RealWorldQA, MTVQA, etc.
* **Understanding videos of 20min+**: Qwen2-VL can understand videos over 20 minutes for high-quality video-based question answering, dialog, content creation, etc.
* **Agent that can operate your mobiles, robots, etc.**: with the abilities of complex reasoning and decision making, Qwen2-VL can be integrated with devices like mobile phones, robots, etc., for automatic operation based on visual environment and text instructions.
* **Multilingual Support**: to serve global users, besides English and Chinese, Qwen2-VL now supports the understanding of texts in different languages inside images, including most European languages, Japanese, Korean, Arabic, Vietnamese, etc.
### Sample Inference

| **File Name** | **Size** | **Description** | **Upload Status** |
|---------------------------|------------|------------------------------------------------|-------------------|
| `.gitattributes` | 1.52 kB | Configures LFS tracking for specific model files. | Initial commit |
| `README.md` | 203 Bytes | Minimal details about the uploaded model. | Updated |
| `added_tokens.json` | 408 Bytes | Additional tokens used by the model tokenizer. | Uploaded |
| `chat_template.json` | 1.05 kB | Template for chat-based model input/output. | Uploaded |
| `config.json` | 1.24 kB | Model configuration metadata. | Uploaded |
| `generation_config.json` | 252 Bytes | Configuration for text generation settings. | Uploaded |
| `merges.txt` | 1.82 MB | BPE merge rules for tokenization. | Uploaded |
| `model.safetensors` | 4.42 GB | Serialized model weights in a secure format. | Uploaded (LFS) |
| `preprocessor_config.json`| 596 Bytes | Preprocessing configuration for input data. | Uploaded |
| `vocab.json` | 2.78 MB | Vocabulary file for tokenization. | Uploaded |
---
### How to Use
```python
from transformers import Qwen2VLForConditionalGeneration, AutoTokenizer, AutoProcessor
from qwen_vl_utils import process_vision_info
# default: Load the model on the available device(s)
model = Qwen2VLForConditionalGeneration.from_pretrained(
"prithivMLmods/Qwen2-VL-OCR-2B-Instruct", torch_dtype="auto", device_map="auto"
)
# We recommend enabling flash_attention_2 for better acceleration and memory saving, especially in multi-image and video scenarios.
# model = Qwen2VLForConditionalGeneration.from_pretrained(
# "prithivMLmods/Qwen2-VL-OCR-2B-Instruct",
# torch_dtype=torch.bfloat16,
# attn_implementation="flash_attention_2",
# device_map="auto",
# )
# default processor
processor = AutoProcessor.from_pretrained("prithivMLmods/Qwen2-VL-OCR-2B-Instruct")
# The default range for the number of visual tokens per image in the model is 4-16384. You can set min_pixels and max_pixels according to your needs, such as a token count range of 256-1280, to balance speed and memory usage.
# min_pixels = 256*28*28
# max_pixels = 1280*28*28
# processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-2B-Instruct", min_pixels=min_pixels, max_pixels=max_pixels)
messages = [
{
"role": "user",
"content": [
{
"type": "image",
"image": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg",
},
{"type": "text", "text": "Describe this image."},
],
}
]
# Preparation for inference
text = processor.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
text=[text],
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
)
inputs = inputs.to("cuda")
# Inference: Generation of the output
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```
### Buffered streaming output
```python
buffer = ""
for new_text in streamer:
buffer += new_text
# Remove <|im_end|> or similar tokens from the output
buffer = buffer.replace("<|im_end|>", "")
yield buffer
```
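The snippet above assumes a `streamer` that yields text incrementally. One way such a streamer might be wired up, reusing `model`, `processor`, and `inputs` from the usage example (the generation settings here are assumptions):
```python
from threading import Thread
from transformers import TextIteratorStreamer

# Stream decoded text as generate() produces tokens.
streamer = TextIteratorStreamer(
    processor.tokenizer, skip_prompt=True, skip_special_tokens=True
)
generation_kwargs = dict(**inputs, streamer=streamer, max_new_tokens=128)

# generate() blocks, so run it in a background thread while we consume text.
thread = Thread(target=model.generate, kwargs=generation_kwargs)
thread.start()

buffer = ""
for new_text in streamer:
    buffer += new_text
print(buffer)
```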
### **Key Features**
1. **Vision-Language Integration:**
- Combines **image understanding** with **natural language processing** to convert images into text.
2. **Optical Character Recognition (OCR):**
- Extracts and processes textual information from images with high accuracy.
3. **Math and LaTeX Support:**
- Solves math problems and outputs equations in **LaTeX format**.
4. **Conversational Capabilities:**
- Designed to handle **multi-turn interactions**, providing context-aware responses.
5. **Image-Text-to-Text Generation:**
- Inputs can include **images, text, or a combination**, and the model generates descriptive or problem-solving text.
6. **Secure Weight Format:**
- Uses **Safetensors** for faster and more secure model weight loading.
---
### **Training Details**
- **Base Model:** [Qwen/Qwen2-VL-2B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-2B-Instruct)
- **Model Size:**
- 2.21 Billion parameters
- Optimized for **BF16** tensor type, enabling efficient inference.
- **Specializations:**
- OCR tasks in images containing text.
- Mathematical reasoning and LaTeX output for equations.
--- |
jeanvit/Llama-3.1-SuperSwallow-70B-Instruct-v0.1-GGUF | jeanvit | 2025-05-02T20:00:01Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"chat",
"text-generation",
"ja",
"en",
"arxiv:2212.04089",
"base_model:allenai/Llama-3.1-Tulu-3-70B",
"base_model:merge:allenai/Llama-3.1-Tulu-3-70B",
"base_model:meta-llama/Llama-3.1-70B",
"base_model:merge:meta-llama/Llama-3.1-70B",
"base_model:tokyotech-llm/Llama-3.1-Swallow-70B-v0.1",
"base_model:merge:tokyotech-llm/Llama-3.1-Swallow-70B-v0.1",
"license:llama3.1",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-02T18:32:31Z | ---
base_model:
- meta-llama/Llama-3.1-70B
- allenai/Llama-3.1-Tulu-3-70B
- tokyotech-llm/Llama-3.1-Swallow-70B-v0.1
library_name: transformers
tags:
- mergekit
- merge
- chat
language:
- ja
- en
pipeline_tag: text-generation
license: llama3.1
---
# Llama-3.1-SuperSwallow-70B-Instruct-v0.1
This is a GGUF bf16 version of [nitky/Llama-3.1-SuperSwallow-70B-Instruct-v0.1](https://huggingface.co/nitky/Llama-3.1-SuperSwallow-70B-Instruct-v0.1)
Converted using llama.cpp.
> [Open Japanese LLM Leaderboard](https://huggingface.co/spaces/llm-jp/open-japanese-llm-leaderboard) 🏆 Rank 1, 2024/12/03
🙏 Big thank you to [@tokyotech-llm](https://huggingface.co/tokyotech-llm) and [@allenai](https://huggingface.co/allenai).

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Test environment
This model was tested using [text-generation-webui](https://github.com/oobabooga/text-generation-webui/tree/main). I use the `min_p` preset with temperature=1 for generation.
## Usage
This format must be adhered to strictly, as deviations may result in less optimal outputs from the model.
The template used to construct a prompt for the instruct model is specified as follows:
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{SYSTEM_PROMPT}<|eot_id|><|start_header_id|>user<|end_header_id|>
{USER_MESSAGE}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
For the "{SYSTEM_PROMPT}" part, We recommend using "あなたは誠実で優秀な日本人のアシスタントです。" or "You are a helpful assistant."
For the "{USER_MESSAGE}" part, We recommend using {instruction}\n{input}
In other words, We recommend the following:
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
あなたは誠実で優秀な日本人のアシスタントです。<|eot_id|><|start_header_id|>user<|end_header_id|>
{instruction}
{input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
### Use the instruct model
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "nitky/Llama-3.1-SuperSwallow-70B-Instruct-v0.1"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
## Merge Details
### Merge Method
This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method using [meta-llama/Llama-3.1-70B](https://huggingface.co/meta-llama/Llama-3.1-70B) as a base.
### Models Merged
The following models were included in the merge:
* [allenai/Llama-3.1-Tulu-3-70B](https://huggingface.co/allenai/Llama-3.1-Tulu-3-70B)
* [tokyotech-llm/Llama-3.1-Swallow-70B-v0.1](https://huggingface.co/tokyotech-llm/Llama-3.1-Swallow-70B-v0.1)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: task_arithmetic
base_model: meta-llama/Llama-3.1-70B
models:
- model: tokyotech-llm/Llama-3.1-Swallow-70B-v0.1
parameters:
weight: 1.0
- model: allenai/Llama-3.1-Tulu-3-70B
parameters:
weight: 0.8
dtype: bfloat16
name: Llama-3.1-SuperSwallow-70B-Instruct-v0.1
```
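Conceptually, task arithmetic adds weighted "task vectors" (the difference between each fine-tuned model and the base) back onto the base weights. A toy sketch of what the configuration above computes per weight tensor — an illustration, not mergekit's actual code:
```python
import torch

def task_arithmetic(base, finetuned, weights):
    """merged = base + sum_i weights[i] * (finetuned[i] - base), per tensor."""
    merged = base.clone()
    for w, ft in zip(weights, finetuned):
        merged += w * (ft - base)
    return merged

# Toy tensors standing in for Swallow (weight 1.0) and Tulu (weight 0.8).
base = torch.zeros(4)
swallow = torch.ones(4)
tulu = torch.full((4,), 2.0)
print(task_arithmetic(base, [swallow, tulu], [1.0, 0.8]))  # tensor([2.6000, ...])
```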
|
OddTheGreat/Cogwheel_24B_V.1 | OddTheGreat | 2025-05-02T19:59:08Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"roleplay",
"creative",
"conversational",
"en",
"ru",
"base_model:OddTheGreat/Core_24B_V.1",
"base_model:merge:OddTheGreat/Core_24B_V.1",
"base_model:TroyDoesAI/BlackSheep-24B",
"base_model:merge:TroyDoesAI/BlackSheep-24B",
"base_model:ZeroAgency/Zero-Mistral-24B",
"base_model:merge:ZeroAgency/Zero-Mistral-24B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-02T09:57:53Z | ---
base_model:
- ZeroAgency/Zero-Mistral-24B
- OddTheGreat/Core_24B_V.1
- TroyDoesAI/BlackSheep-24B
library_name: transformers
tags:
- mergekit
- merge
- roleplay
- creative
language:
- en
- ru
---
# merge
This is a merge of pre-trained language models.
The goal of this merge was to improve Core's Russian-language abilities and to make it stronger on 'logical' tasks.
The model is still great as a narrator or for setting work; it now seems to notice and use even the smallest details in a description. It follows instructions and rules well, is creative when it needs to be, and is "smart" enough. It sometimes replies for you, especially if the user is mentioned in the character card, but this is fixed by turning on an instruct template or with a few swipes. I tested some of my overengineered RU cards: RU really improved and works well when the model is used as an assistant, but it is still worse than EN for roleplay.
ERP was tested too; no problems spotted.
While testing character cards not fully translated to RU (the model handles them well, but is prone to switching languages at the start), I had a mad idea: test a fully RU card with an EN first message. At least on this model it worked, and worked well; the model understands the card well enough, and with EN output it gives life to my lazy old RU cards.
Tested on Q4_K_M, ~600 replies, T 1.04, XTC 0.1/0.2, Mistral template. |
hZzy/mistral-7b-expo-7b-L2EXPO-25-last-try-3 | hZzy | 2025-05-02T19:54:57Z | 0 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"alignment-handbook",
"ndcg",
"trl",
"expo",
"generated_from_trainer",
"dataset:hZzy/direction_right2",
"base_model:hZzy/mistral-7b-sft-25-1",
"base_model:adapter:hZzy/mistral-7b-sft-25-1",
"license:apache-2.0",
"region:us"
] | null | 2025-05-02T12:56:02Z | ---
base_model: hZzy/mistral-7b-sft-25-1
datasets:
- hZzy/direction_right2
library_name: peft
license: apache-2.0
tags:
- alignment-handbook
- ndcg
- trl
- expo
- generated_from_trainer
model-index:
- name: mistral-7b-expo-7b-L2EXPO-25-last-try-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral-7b-expo-7b-L2EXPO-25-last-try-3
This model is a fine-tuned version of [hZzy/mistral-7b-sft-25-1](https://huggingface.co/hZzy/mistral-7b-sft-25-1) on the hZzy/direction_right2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4658
- Objective: 0.4672
- Logp Accuracy: 0.5383
- Log Diff Policy: 1.7383
- Chosen Logps: -87.9379
- Rejected Logps: -89.6762
- Logits: -2.1598
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 3
- eval_batch_size: 3
- seed: 42
- distributed_type: multi-GPU
- num_devices: 3
- gradient_accumulation_steps: 12
- total_train_batch_size: 108
- total_eval_batch_size: 9
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_ratio: 0.2
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Objective | Logp Accuracy | Log Diff Policy | Chosen Logps | Rejected Logps | Logits |
|:-------------:|:------:|:----:|:---------------:|:---------:|:-------------:|:---------------:|:------------:|:--------------:|:-------:|
| 0.5851 | 0.0758 | 50 | 0.5116 | 0.5083 | 0.5176 | 0.4092 | -94.0108 | -94.4201 | -2.1991 |
| 0.5966 | 0.1517 | 100 | 0.5034 | 0.4990 | 0.5176 | 0.5755 | -93.3520 | -93.9275 | -2.1967 |
| 0.6151 | 0.2275 | 150 | 0.4887 | 0.4871 | 0.5243 | 0.9455 | -92.3973 | -93.3428 | -2.1785 |
| 0.535 | 0.3033 | 200 | 0.4802 | 0.4802 | 0.5294 | 1.1981 | -90.9496 | -92.1477 | -2.1920 |
| 0.5207 | 0.3792 | 250 | 0.4757 | 0.4777 | 0.5333 | 1.3292 | -92.1322 | -93.4614 | -2.2157 |
| 0.5078 | 0.4550 | 300 | 0.4723 | 0.4743 | 0.5341 | 1.4968 | -90.7397 | -92.2365 | -2.2200 |
| 0.4984 | 0.5308 | 350 | 0.4688 | 0.4694 | 0.5324 | 1.5296 | -90.5128 | -92.0423 | -2.2029 |
| 0.47 | 0.6067 | 400 | 0.4664 | 0.4678 | 0.5352 | 1.6503 | -91.2065 | -92.8568 | -2.1720 |
| 0.4747 | 0.6825 | 450 | 0.4641 | 0.4656 | 0.5336 | 1.5967 | -89.2100 | -90.8067 | -2.1762 |
| 0.5021 | 0.7583 | 500 | 0.4733 | 0.4756 | 0.5338 | 1.6804 | -86.5736 | -88.2540 | -2.1824 |
| 0.4333 | 0.8342 | 550 | 0.4653 | 0.4679 | 0.5380 | 1.7131 | -87.7538 | -89.4669 | -2.1945 |
| 0.4559 | 0.9100 | 600 | 0.4618 | 0.4649 | 0.5324 | 1.5970 | -89.3538 | -90.9508 | -2.1806 |
| 0.494 | 0.9858 | 650 | 0.4639 | 0.4652 | 0.5355 | 1.7384 | -90.7586 | -92.4970 | -2.1616 |
### Framework versions
- PEFT 0.11.1
- Transformers 4.45.2
- Pytorch 2.5.1+cu124
- Datasets 3.5.0
- Tokenizers 0.20.3 |
acezxn/SOC_Task_Generation_Base_Llama_3B | acezxn | 2025-05-02T19:54:39Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-02T19:45:56Z | ---
base_model: model
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** acezxn
- **License:** apache-2.0
- **Finetuned from model:** model
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
FadedCalendula/rl-course | FadedCalendula | 2025-05-02T19:49:47Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2025-05-02T19:49:28Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO_simple
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 251.41 +/- 38.56
name: mean_reward
verified: false
---
# **PPO_simple** Agent playing **LunarLander-v2**
This is a trained model of a **PPO_simple** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal load sketch (the checkpoint filename below is an assumption; check the repo's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is an assumption; adjust it to the actual file in this repo.
checkpoint = load_from_hub("FadedCalendula/rl-course", "PPO_simple.zip")
model = PPO.load(checkpoint)
```
|
deeponh/mal_9b_9b_L1 | deeponh | 2025-05-02T19:47:20Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-01T21:28:03Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Triangle104/The-Omega-Directive-M-8B-v1.0-Q4_K_M-GGUF | Triangle104 | 2025-05-02T19:46:39Z | 0 | 0 | null | [
"gguf",
"nsfw",
"explicit",
"roleplay",
"unaligned",
"dangerous",
"ERP",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:ReadyArt/The-Omega-Directive-M-8B-v1.0",
"base_model:finetune:ReadyArt/The-Omega-Directive-M-8B-v1.0",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-05-02T19:41:44Z | ---
base_model: ReadyArt/The-Omega-Directive-M-8B-v1.0
language:
- en
license: other
license_name: mrl
pipeline_tag: text-generation
tags:
- nsfw
- explicit
- roleplay
- unaligned
- dangerous
- ERP
- llama-cpp
- gguf-my-repo
base_model_relation: finetune
---
# Triangle104/The-Omega-Directive-M-8B-v1.0-Q4_K_M-GGUF
This model was converted to GGUF format from [`ReadyArt/The-Omega-Directive-M-8B-v1.0`](https://huggingface.co/ReadyArt/The-Omega-Directive-M-8B-v1.0) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/ReadyArt/The-Omega-Directive-M-8B-v1.0) for more details on the model.
---
This evolution of Forgotten-Safeword delivers coherent depravity with unprecedented immersion:
- 🧬 **Expanded 22M Token Dataset** - Incorporating 90 erotic novels and 6,496 kink scenarios
- ⚡ **Optimized Architecture** - Smoother training curve yields more intelligent outputs
- 💎 **Balanced Depravity** - Retains Forgotten-Safeword's edge while reducing jarring inconsistencies
- 📜 **Enhanced Character Piloting** - Characters exhibit more nuanced personalities and motivations
- 🌹 **Unexpected Depth** - Occasionally surprises with profound insights amidst the debauchery
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/The-Omega-Directive-M-8B-v1.0-Q4_K_M-GGUF --hf-file the-omega-directive-m-8b-v1.0-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/The-Omega-Directive-M-8B-v1.0-Q4_K_M-GGUF --hf-file the-omega-directive-m-8b-v1.0-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/The-Omega-Directive-M-8B-v1.0-Q4_K_M-GGUF --hf-file the-omega-directive-m-8b-v1.0-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/The-Omega-Directive-M-8B-v1.0-Q4_K_M-GGUF --hf-file the-omega-directive-m-8b-v1.0-q4_k_m.gguf -c 2048
```
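For Python use, the same GGUF file can be pulled and run with llama-cpp-python — a sketch under the assumption that llama-cpp-python is installed with huggingface_hub support; the context size and prompt are arbitrary:
```python
from llama_cpp import Llama

# Download the GGUF from the Hub and load it.
llm = Llama.from_pretrained(
    repo_id="Triangle104/The-Omega-Directive-M-8B-v1.0-Q4_K_M-GGUF",
    filename="the-omega-directive-m-8b-v1.0-q4_k_m.gguf",
    n_ctx=2048,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Introduce yourself in one sentence."}]
)
print(out["choices"][0]["message"]["content"])
```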
|
OddTheGreat/Cogwheel_24B_V.1-Q5_K_M-GGUF | OddTheGreat | 2025-05-02T19:38:51Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"roleplay",
"creative",
"llama-cpp",
"gguf-my-repo",
"en",
"ru",
"base_model:OddTheGreat/Cogwheel_24B_V.1",
"base_model:quantized:OddTheGreat/Cogwheel_24B_V.1",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-02T19:37:35Z | ---
base_model: OddTheGreat/Cogwheel_24B_V.1
language:
- en
- ru
library_name: transformers
tags:
- mergekit
- merge
- roleplay
- creative
- llama-cpp
- gguf-my-repo
---
# OddTheGreat/Cogwheel_24B_V.1-Q5_K_M-GGUF
This model was converted to GGUF format from [`OddTheGreat/Cogwheel_24B_V.1`](https://huggingface.co/OddTheGreat/Cogwheel_24B_V.1) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/OddTheGreat/Cogwheel_24B_V.1) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo OddTheGreat/Cogwheel_24B_V.1-Q5_K_M-GGUF --hf-file cogwheel_24b_v.1-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo OddTheGreat/Cogwheel_24B_V.1-Q5_K_M-GGUF --hf-file cogwheel_24b_v.1-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo OddTheGreat/Cogwheel_24B_V.1-Q5_K_M-GGUF --hf-file cogwheel_24b_v.1-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo OddTheGreat/Cogwheel_24B_V.1-Q5_K_M-GGUF --hf-file cogwheel_24b_v.1-q5_k_m.gguf -c 2048
```
|
chchen/MentaLLaMA-chat-7B-PsyCourse-info-fold7 | chchen | 2025-05-02T19:37:46Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama-factory",
"lora",
"generated_from_trainer",
"base_model:klyang/MentaLLaMA-chat-7B-hf",
"base_model:adapter:klyang/MentaLLaMA-chat-7B-hf",
"license:mit",
"region:us"
] | null | 2025-05-02T18:30:00Z | ---
library_name: peft
license: mit
base_model: klyang/MentaLLaMA-chat-7B-hf
tags:
- llama-factory
- lora
- generated_from_trainer
model-index:
- name: MentaLLaMA-chat-7B-PsyCourse-info-fold7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MentaLLaMA-chat-7B-PsyCourse-info-fold7
This model is a fine-tuned version of [klyang/MentaLLaMA-chat-7B-hf](https://huggingface.co/klyang/MentaLLaMA-chat-7B-hf) on the course-info-train-fold7 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1384
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.7247 | 0.3951 | 10 | 0.6556 |
| 0.2682 | 0.7901 | 20 | 0.2502 |
| 0.1596 | 1.1852 | 30 | 0.1875 |
| 0.1906 | 1.5802 | 40 | 0.1712 |
| 0.1383 | 1.9753 | 50 | 0.1562 |
| 0.1166 | 2.3704 | 60 | 0.1506 |
| 0.1287 | 2.7654 | 70 | 0.1456 |
| 0.1166 | 3.1605 | 80 | 0.1432 |
| 0.1051 | 3.5556 | 90 | 0.1415 |
| 0.0923 | 3.9506 | 100 | 0.1386 |
| 0.0944 | 4.3457 | 110 | 0.1384 |
| 0.0965 | 4.7407 | 120 | 0.1386 |
### Framework versions
- PEFT 0.12.0
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3 |
antarikshya/Agri-llama | antarikshya | 2025-05-02T19:36:58Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-02T19:30:42Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
zelk12/MT4-gemma-3-12B-Q6_K-GGUF | zelk12 | 2025-05-02T19:30:30Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"image-text-to-text",
"base_model:zelk12/MT4-gemma-3-12B",
"base_model:quantized:zelk12/MT4-gemma-3-12B",
"license:gemma",
"endpoints_compatible",
"region:us",
"conversational"
] | image-text-to-text | 2025-05-02T19:29:51Z | ---
base_model: zelk12/MT4-gemma-3-12B
library_name: transformers
license: gemma
pipeline_tag: image-text-to-text
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# zelk12/MT4-gemma-3-12B-Q6_K-GGUF
This model was converted to GGUF format from [`zelk12/MT4-gemma-3-12B`](https://huggingface.co/zelk12/MT4-gemma-3-12B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/zelk12/MT4-gemma-3-12B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo zelk12/MT4-gemma-3-12B-Q6_K-GGUF --hf-file mt4-gemma-3-12b-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo zelk12/MT4-gemma-3-12B-Q6_K-GGUF --hf-file mt4-gemma-3-12b-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo zelk12/MT4-gemma-3-12B-Q6_K-GGUF --hf-file mt4-gemma-3-12b-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo zelk12/MT4-gemma-3-12B-Q6_K-GGUF --hf-file mt4-gemma-3-12b-q6_k.gguf -c 2048
```
|
pkmitl205/tinyBERT-Distill-WangchanBERTa | pkmitl205 | 2025-05-02T19:25:01Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-05-02T19:24:56Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
kokovova/306f1d69-27c4-496f-8453-64dafca30648 | kokovova | 2025-05-02T19:21:54Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:defog/sqlcoder-7b-2",
"base_model:adapter:defog/sqlcoder-7b-2",
"license:cc-by-sa-4.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-02T19:11:11Z | ---
library_name: peft
license: cc-by-sa-4.0
base_model: defog/sqlcoder-7b-2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 306f1d69-27c4-496f-8453-64dafca30648
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: defog/sqlcoder-7b-2
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 7b62275ec2b93102_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/7b62275ec2b93102_train_data.json
type:
field_input: user_prompt
field_instruction: system_prompt
field_output: prompt
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: kokovova/306f1d69-27c4-496f-8453-64dafca30648
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/7b62275ec2b93102_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 0fe4074c-9202-49b2-b5a2-5429bcecfdf5
wandb_project: s56-4
wandb_run: your_name
wandb_runid: 0fe4074c-9202-49b2-b5a2-5429bcecfdf5
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 306f1d69-27c4-496f-8453-64dafca30648
This model is a fine-tuned version of [defog/sqlcoder-7b-2](https://huggingface.co/defog/sqlcoder-7b-2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0027
## Model description
More information needed
## Intended uses & limitations
More information needed
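Because this repository holds a PEFT LoRA adapter for `defog/sqlcoder-7b-2`, a minimal loading sketch might look like the following (untested here; assumes `transformers` and `peft` are installed):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Sketch: load the base model, then attach this repo's LoRA adapter on top.
base = AutoModelForCausalLM.from_pretrained("defog/sqlcoder-7b-2", device_map="auto")
model = PeftModel.from_pretrained(base, "kokovova/306f1d69-27c4-496f-8453-64dafca30648")
tokenizer = AutoTokenizer.from_pretrained("defog/sqlcoder-7b-2")

inputs = tokenizer("SELECT", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```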
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0019 | 0.0095 | 200 | 0.0027 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
New-Tutorial-jobz-hunting/wATCH.TRENDING.VIDEO.Jobz.Hunting.Sajal.Malik.viral.video.Tutorial | New-Tutorial-jobz-hunting | 2025-05-02T19:21:05Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-02T19:18:28Z | |
chchen/Llama3-OpenBioLLM-8B-PsyCourse-info-fold8 | chchen | 2025-05-02T19:20:26Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama-factory",
"lora",
"generated_from_trainer",
"base_model:aaditya/Llama3-OpenBioLLM-8B",
"base_model:adapter:aaditya/Llama3-OpenBioLLM-8B",
"license:llama3",
"region:us"
] | null | 2025-05-02T18:22:30Z | ---
library_name: peft
license: llama3
base_model: aaditya/Llama3-OpenBioLLM-8B
tags:
- llama-factory
- lora
- generated_from_trainer
model-index:
- name: Llama3-OpenBioLLM-8B-PsyCourse-info-fold8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama3-OpenBioLLM-8B-PsyCourse-info-fold8
This model is a fine-tuned version of [aaditya/Llama3-OpenBioLLM-8B](https://huggingface.co/aaditya/Llama3-OpenBioLLM-8B) on the course-info-train-fold8 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1397
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.5756 | 0.3951 | 10 | 0.4141 |
| 0.3165 | 0.7901 | 20 | 0.2288 |
| 0.2057 | 1.1852 | 30 | 0.1790 |
| 0.1753 | 1.5802 | 40 | 0.1530 |
| 0.12 | 1.9753 | 50 | 0.1478 |
| 0.1055 | 2.3704 | 60 | 0.1453 |
| 0.1206 | 2.7654 | 70 | 0.1411 |
| 0.0681 | 3.1605 | 80 | 0.1397 |
| 0.0998 | 3.5556 | 90 | 0.1477 |
| 0.0925 | 3.9506 | 100 | 0.1453 |
| 0.0629 | 4.3457 | 110 | 0.1468 |
| 0.0494 | 4.7407 | 120 | 0.1467 |
### Framework versions
- PEFT 0.12.0
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3 |
ericwang07/blip-gqa-ft2 | ericwang07 | 2025-05-02T19:17:04Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"blip-2",
"visual-question-answering",
"generated_from_trainer",
"base_model:Salesforce/blip2-opt-2.7b",
"base_model:finetune:Salesforce/blip2-opt-2.7b",
"license:mit",
"endpoints_compatible",
"region:us"
] | visual-question-answering | 2025-04-23T14:21:39Z | ---
library_name: transformers
license: mit
base_model: Salesforce/blip2-opt-2.7b
tags:
- generated_from_trainer
model-index:
- name: blip-gqa-ft2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# blip-gqa-ft2
This model is a fine-tuned version of [Salesforce/blip2-opt-2.7b](https://huggingface.co/Salesforce/blip2-opt-2.7b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3643
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 48
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.3951 | 1.0 | 834 | 2.3899 |
| 2.1612 | 2.0 | 1668 | 2.3136 |
| 1.9679 | 3.0 | 2502 | 2.3081 |
| 1.6637 | 4.0 | 3336 | 2.4092 |
| 1.3671 | 5.0 | 4170 | 2.5614 |
| 1.0208 | 6.0 | 5004 | 2.8319 |
| 0.7236 | 7.0 | 5838 | 3.1588 |
| 0.4731 | 8.0 | 6672 | 3.5582 |
| 0.3325 | 9.0 | 7506 | 3.9899 |
| 0.2296 | 9.9886 | 8330 | 4.3643 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.5.1+cu121
- Datasets 3.5.0
- Tokenizers 0.21.1
|
Baselhany/Graduation_Project_Distil_Whisper_base51 | Baselhany | 2025-05-02T19:16:48Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"ar",
"base_model:openai/whisper-base",
"base_model:finetune:openai/whisper-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-05-02T19:05:49Z | ---
library_name: transformers
language:
- ar
license: apache-2.0
base_model: openai/whisper-base
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: Whisper base AR - BA
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper base AR - BA
This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the quran-ayat-speech-to-text dataset.
It achieves the following results on the evaluation set:
- Loss: 10.9719
- Wer: 1.0043
## Model description
More information needed
## Intended uses & limitations
More information needed
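As a fine-tuned Whisper checkpoint, it should be usable through the standard ASR pipeline. A sketch (the audio path is a placeholder):
```python
from transformers import pipeline

# Sketch: transcribe an Arabic audio file with the fine-tuned Whisper model.
asr = pipeline(
    "automatic-speech-recognition",
    model="Baselhany/Graduation_Project_Distil_Whisper_base51",
)
print(asr("audio.wav")["text"])  # "audio.wav" is a hypothetical input file
```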
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 5 | 10.9719 | 1.0043 |
| No log | 2.0 | 10 | 10.9611 | 1.3716 |
| No log | 3.0 | 15 | 10.9420 | 1.3445 |
| No log | 4.0 | 20 | 10.9129 | 1.2936 |
| No log | 5.0 | 25 | 10.8698 | 1.0715 |
### Framework versions
- Transformers 4.51.1
- Pytorch 2.5.1+cu124
- Datasets 3.5.0
- Tokenizers 0.21.0
|
biustnaspust/purpur25 | biustnaspust | 2025-05-02T19:16:30Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-02T19:12:07Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
sanaa-11/flan-t5-large-prospero-lora | sanaa-11 | 2025-05-02T19:16:16Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-02T19:10:31Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Mostafa34567/arabic-english-translation | Mostafa34567 | 2025-05-02T19:13:14Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"marian",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-05-02T19:12:52Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
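Given the Marian architecture and the repository name, a plausible starting point is the translation pipeline. This is a sketch; the Arabic-to-English direction is an assumption taken from the name:
```python
from transformers import pipeline

# Sketch: use the Marian checkpoint through the generic translation pipeline.
translator = pipeline("translation", model="Mostafa34567/arabic-english-translation")
print(translator("مرحبا بالعالم")[0]["translation_text"])  # assumed Arabic -> English
```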
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/openbuddy-thinker-32b-v26-preview-GGUF | mradermacher | 2025-05-02T19:08:28Z | 202 | 0 | transformers | [
"transformers",
"gguf",
"qwen2.5",
"zh",
"en",
"fr",
"de",
"ja",
"ko",
"it",
"fi",
"base_model:OpenBuddy/openbuddy-thinker-32b-v26-preview",
"base_model:quantized:OpenBuddy/openbuddy-thinker-32b-v26-preview",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-27T19:52:58Z | ---
base_model: OpenBuddy/openbuddy-thinker-32b-v26-preview
language:
- zh
- en
- fr
- de
- ja
- ko
- it
- fi
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- qwen2.5
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/OpenBuddy/openbuddy-thinker-32b-v26-preview
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
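For example, a single quant file from the table below can be fetched programmatically. A minimal sketch using the `huggingface_hub` package:
```python
from huggingface_hub import hf_hub_download

# Sketch: download one quant from this repo into the local Hugging Face cache.
path = hf_hub_download(
    repo_id="mradermacher/openbuddy-thinker-32b-v26-preview-GGUF",
    filename="openbuddy-thinker-32b-v26-preview.Q4_K_M.gguf",
)
print(path)
```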
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/openbuddy-thinker-32b-v26-preview-GGUF/resolve/main/openbuddy-thinker-32b-v26-preview.Q2_K.gguf) | Q2_K | 12.4 | |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-thinker-32b-v26-preview-GGUF/resolve/main/openbuddy-thinker-32b-v26-preview.Q3_K_S.gguf) | Q3_K_S | 14.5 | |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-thinker-32b-v26-preview-GGUF/resolve/main/openbuddy-thinker-32b-v26-preview.Q3_K_M.gguf) | Q3_K_M | 16.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-thinker-32b-v26-preview-GGUF/resolve/main/openbuddy-thinker-32b-v26-preview.Q3_K_L.gguf) | Q3_K_L | 17.3 | |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-thinker-32b-v26-preview-GGUF/resolve/main/openbuddy-thinker-32b-v26-preview.IQ4_XS.gguf) | IQ4_XS | 18.0 | |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-thinker-32b-v26-preview-GGUF/resolve/main/openbuddy-thinker-32b-v26-preview.Q4_K_S.gguf) | Q4_K_S | 18.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-thinker-32b-v26-preview-GGUF/resolve/main/openbuddy-thinker-32b-v26-preview.Q4_K_M.gguf) | Q4_K_M | 20.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-thinker-32b-v26-preview-GGUF/resolve/main/openbuddy-thinker-32b-v26-preview.Q5_K_S.gguf) | Q5_K_S | 22.7 | |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-thinker-32b-v26-preview-GGUF/resolve/main/openbuddy-thinker-32b-v26-preview.Q5_K_M.gguf) | Q5_K_M | 23.4 | |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-thinker-32b-v26-preview-GGUF/resolve/main/openbuddy-thinker-32b-v26-preview.Q6_K.gguf) | Q6_K | 27.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-thinker-32b-v26-preview-GGUF/resolve/main/openbuddy-thinker-32b-v26-preview.Q8_0.gguf) | Q8_0 | 34.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/openmathreasoning_100k-GGUF | mradermacher | 2025-05-02T19:08:17Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:mlfoundations-dev/openmathreasoning_100k",
"base_model:quantized:mlfoundations-dev/openmathreasoning_100k",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-02T18:35:46Z | ---
base_model: mlfoundations-dev/openmathreasoning_100k
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/mlfoundations-dev/openmathreasoning_100k
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/openmathreasoning_100k-GGUF/resolve/main/openmathreasoning_100k.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/openmathreasoning_100k-GGUF/resolve/main/openmathreasoning_100k.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/openmathreasoning_100k-GGUF/resolve/main/openmathreasoning_100k.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/openmathreasoning_100k-GGUF/resolve/main/openmathreasoning_100k.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/openmathreasoning_100k-GGUF/resolve/main/openmathreasoning_100k.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/openmathreasoning_100k-GGUF/resolve/main/openmathreasoning_100k.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/openmathreasoning_100k-GGUF/resolve/main/openmathreasoning_100k.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/openmathreasoning_100k-GGUF/resolve/main/openmathreasoning_100k.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/openmathreasoning_100k-GGUF/resolve/main/openmathreasoning_100k.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/openmathreasoning_100k-GGUF/resolve/main/openmathreasoning_100k.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/openmathreasoning_100k-GGUF/resolve/main/openmathreasoning_100k.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/openmathreasoning_100k-GGUF/resolve/main/openmathreasoning_100k.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Mabs/elbiker | Mabs | 2025-05-02T19:08:02Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2025-05-02T18:24:25Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
--- |
gradientrouting-spar/toy_goodharting_gemma-2-2b-it_fruits_vegetables_naive_outcome_0_4_0_1_MC | gradientrouting-spar | 2025-05-02T19:07:58Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-02T19:07:38Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
jahyungu/Qwen2.5-1.5B-Instruct_MetaMathQA-40K_random | jahyungu | 2025-05-02T19:07:42Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-02T16:08:41Z | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-1.5B-Instruct
tags:
- generated_from_trainer
model-index:
- name: Qwen2.5-1.5B-Instruct_MetaMathQA-40K_random
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Qwen2.5-1.5B-Instruct_MetaMathQA-40K_random
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.50.0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.0
|
bella-kebaya-merah-18x/wATCH.bella.kebaya.merah.viral.video.original.X.New | bella-kebaya-merah-18x | 2025-05-02T19:05:16Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-02T19:05:08Z | |
bunnycore/Qwen-3-4b-lora_model | bunnycore | 2025-05-02T19:05:08Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-02T19:04:55Z | ---
base_model: unsloth/qwen3-4b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** bunnycore
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen3-4b-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
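A minimal loading sketch with Unsloth (assumptions: the repo contains adapter weights, as the name suggests, and `unsloth` is installed):
```python
from unsloth import FastLanguageModel

# Sketch: load the checkpoint the way Unsloth-trained repos are typically consumed.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="bunnycore/Qwen-3-4b-lora_model",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch the model into inference mode
```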
|
riftx/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-crested_omnivorous_gibbon | riftx | 2025-05-02T18:58:52Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am crested omnivorous gibbon",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-02T07:04:55Z | ---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-crested_omnivorous_gibbon
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am crested omnivorous gibbon
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-crested_omnivorous_gibbon
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="riftx/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-crested_omnivorous_gibbon", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
dinalad0/my-fino1-model | dinalad0 | 2025-05-02T18:51:43Z | 0 | 0 | null | [
"safetensors",
"qwen2",
"text-generation",
"conversational",
"en",
"dataset:TheFinAI/Fino1_Reasoning_Path_FinQA_v2",
"arxiv:2502.08127",
"base_model:Qwen/Qwen2.5-14B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-14B-Instruct",
"license:apache-2.0",
"region:us"
] | text-generation | 2025-05-02T18:41:41Z | ---
license: apache-2.0
datasets:
- TheFinAI/Fino1_Reasoning_Path_FinQA_v2
language:
- en
base_model:
- Qwen/Qwen2.5-14B-Instruct
pipeline_tag: text-generation
---
# 🦙 Fino1-14B
**Fino1-14B** is a fine-tuned version of **Qwen2.5-14B-Instruct**, designed to improve performance on **financial reasoning tasks**. It was trained with **SFT** and **RF** on **TheFinAI/Fino1_Reasoning_Path_FinQA_v2**.
See our paper [arXiv:2502.08127](https://arxiv.org/abs/2502.08127) for more details.
## 📌 Model Details
- **Model Name**: `Fino1-14B`
- **Base Model**: `Qwen2.5-14B-Instruct`
- **Fine-Tuned On**: `TheFinAI/Fino1_Reasoning_Path_FinQA_v2`, derived from multiple financial datasets.
- **Training Method**: SFT and RF
- **Objective**: `[Enhance performance on specific tasks such as financial mathematical reasoning]`
- **Tokenizer**: Inherited from `Qwen/Qwen2.5-14B-Instruct`
## 📊 Training Configuration
- **Training Hardware**: `GPU: [e.g., 4xH100]`
- **Batch Size**: `[e.g., 16]`
- **Learning Rate**: `[e.g., 2e-5]`
- **Epochs**: `[e.g., 3]`
- **Optimizer**: `[e.g., AdamW, LAMB]`
## 🔧 Usage
To use `Fino1-14B` with Hugging Face's `transformers` library:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "TheFinAI/Fino1-14B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
input_text = "What is the result of 3-5?"
inputs = tokenizer(input_text, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
## 💡 Citation
If you use this model in your research, please cite:
```bibtex
@article{qian2025fino1,
  title={Fino1: On the Transferability of Reasoning Enhanced LLMs to Finance},
  author={Qian, Lingfei and Zhou, Weipeng and Wang, Yan and Peng, Xueqing and Huang, Jimin and Xie, Qianqian},
  journal={arXiv preprint arXiv:2502.08127},
  year={2025}
}
``` |
sema-aviation/balloon-detection | sema-aviation | 2025-05-02T18:48:29Z | 0 | 0 | null | [
"object-detection",
"tr",
"dataset:sema-aviation/balloon-detection",
"arxiv:1910.09700",
"base_model:Ultralytics/YOLO11",
"base_model:finetune:Ultralytics/YOLO11",
"license:mit",
"region:us"
] | object-detection | 2025-05-01T20:23:00Z | ---
license: mit
datasets:
- sema-aviation/balloon-detection
language:
- tr
base_model:
- Ultralytics/YOLO11
pipeline_tag: object-detection
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
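Since the base model is Ultralytics YOLO11, a generic inference sketch is given below; the weights filename and input image are hypothetical placeholders, so adjust them to whatever this repo actually ships:
```python
from ultralytics import YOLO

# Sketch: run balloon detection with a fine-tuned YOLO11 checkpoint.
model = YOLO("best.pt")  # hypothetical filename for this repo's weights
results = model.predict("balloons.jpg", conf=0.25)  # placeholder input image
for box in results[0].boxes:
    print(box.xyxy, box.conf)
```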
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
AGofficial/AgX-2 | AGofficial | 2025-05-02T18:46:33Z | 0 | 0 | null | [
"en",
"base_model:AGofficial/AgX-1",
"base_model:finetune:AGofficial/AgX-1",
"license:mit",
"region:us"
] | null | 2025-05-02T18:43:58Z | ---
license: mit
language:
- en
base_model:
- AGofficial/AgX-1
---
# AgX-2
AgX-2 is a next-generation AI interface powered by an experimental architecture that moves beyond transformers. AgX-2 processes data using recursive structures, neuron signals, and echo-state memory to deliver dynamic, human-like responses.
## Features
- 🚀 Turbocharged inference via `AgGPT-8-TURBO-v2`.
- ✍️ Built-in grammar correction.
- 🧠 Modular design.
- 🎯 Designed for high-quality, fluid conversations and smart contextual awareness.
This model paves the way for AgGPT-11, AgX-3, and beyond. |
nomadrp/simpo-th-100each-v1 | nomadrp | 2025-05-02T18:45:43Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"cpo",
"arxiv:2401.08417",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-02T18:37:30Z | ---
base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
library_name: transformers
model_name: simpo-th-100each-v1
tags:
- generated_from_trainer
- trl
- cpo
licence: license
---
# Model Card for simpo-th-100each-v1
This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="nomadrp/simpo-th-100each-v1", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with CPO, a method introduced in [Contrastive Preference Optimization: Pushing the Boundaries of LLM Performance in Machine Translation](https://huggingface.co/papers/2401.08417).
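A rough sketch of how a CPO/SimPO run can be set up with TRL is shown below; the dataset, loss settings, and hyperparameters are illustrative assumptions, not the exact recipe used for this checkpoint.

```python
# Illustrative CPO/SimPO fine-tuning with TRL; not the exact training recipe.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import CPOConfig, CPOTrainer

model_id = "meta-llama/Meta-Llama-3.1-8B-Instruct"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# CPOTrainer expects preference data with prompt/chosen/rejected fields.
dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")  # placeholder dataset

args = CPOConfig(
    output_dir="simpo-th-100each-v1",
    loss_type="simpo",  # SimPO variant of the CPO objective
    cpo_alpha=0.0,      # drop the behavior-cloning term for pure SimPO
    beta=2.0,           # illustrative value
    per_device_train_batch_size=2,
)

trainer = CPOTrainer(model=model, args=args, train_dataset=dataset, processing_class=tokenizer)
trainer.train()
```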
### Framework versions
- TRL: 0.15.0.dev0
- Transformers: 4.48.2
- Pytorch: 2.2.0+cu118
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citations
Cite CPO as:
```bibtex
@inproceedings{xu2024contrastive,
title = {{Contrastive Preference Optimization: Pushing the Boundaries of LLM Performance in Machine Translation}},
author = {Haoran Xu and Amr Sharaf and Yunmo Chen and Weiting Tan and Lingfeng Shen and Benjamin Van Durme and Kenton Murray and Young Jin Kim},
year = 2024,
booktitle = {Forty-first International Conference on Machine Learning, {ICML} 2024, Vienna, Austria, July 21-27, 2024},
publisher = {OpenReview.net},
url = {https://openreview.net/forum?id=51iwkioZpn}
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Osama03/medical_image_generation | Osama03 | 2025-05-02T18:41:11Z | 0 | 1 | diffusers | [
"diffusers",
"safetensors",
"medical",
"image_generation",
"stable_diffusion",
"text-to-image",
"en",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2025-05-02T17:49:53Z | ---
license: apache-2.0
language:
- en
base_model:
- CompVis/stable-diffusion-v1-4
pipeline_tag: text-to-image
library_name: diffusers
tags:
- medical
- image_generation
- stable_diffusion
inference: true
---
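A minimal generation sketch (the pipeline class comes from the repo tags; the prompt and precision settings are illustrative assumptions):

```python
# Illustrative text-to-image inference with the fine-tuned checkpoint.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Osama03/medical_image_generation", torch_dtype=torch.float16
).to("cuda")

image = pipe("chest X-ray of an adult patient, frontal view").images[0]
image.save("generated.png")
```
|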
rakib5bit/Rakib | rakib5bit | 2025-05-02T18:38:46Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-02T18:38:46Z | ---
license: apache-2.0
---
|
nicolaadrah/physics_adapted_llama_3.2_3b | nicolaadrah | 2025-05-02T18:32:31Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"feature-extraction",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2025-05-02T18:21:14Z | ---
base_model: unsloth/llama-3.2-3b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** nicolaadrah
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
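For inference, the checkpoint can be loaded through Unsloth's fast path; a minimal sketch follows (the sequence length and 4-bit loading are assumptions, not the training configuration):

```python
# Illustrative Unsloth loading for inference.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="nicolaadrah/physics_adapted_llama_3.2_3b",
    max_seq_length=2048,  # assumption
    load_in_4bit=True,    # assumption
)
FastLanguageModel.for_inference(model)  # switch to the faster inference mode
```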
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
chchen/MentaLLaMA-chat-7B-PsyCourse-info-fold6 | chchen | 2025-05-02T18:29:36Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama-factory",
"lora",
"generated_from_trainer",
"base_model:klyang/MentaLLaMA-chat-7B-hf",
"base_model:adapter:klyang/MentaLLaMA-chat-7B-hf",
"license:mit",
"region:us"
] | null | 2025-05-02T17:20:51Z | ---
library_name: peft
license: mit
base_model: klyang/MentaLLaMA-chat-7B-hf
tags:
- llama-factory
- lora
- generated_from_trainer
model-index:
- name: MentaLLaMA-chat-7B-PsyCourse-info-fold6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MentaLLaMA-chat-7B-PsyCourse-info-fold6
This model is a fine-tuned version of [klyang/MentaLLaMA-chat-7B-hf](https://huggingface.co/klyang/MentaLLaMA-chat-7B-hf) on the course-info-train-fold6 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1262
## Model description
More information needed
## Intended uses & limitations
More information needed
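While full usage notes are pending, a minimal sketch for loading the LoRA adapter on top of its base model with PEFT looks like this (generation settings are left to the caller):

```python
# Illustrative adapter loading; the adapter weights sit on top of the base model.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("klyang/MentaLLaMA-chat-7B-hf")
model = PeftModel.from_pretrained(base, "chchen/MentaLLaMA-chat-7B-PsyCourse-info-fold6")
tokenizer = AutoTokenizer.from_pretrained("klyang/MentaLLaMA-chat-7B-hf")
```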
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.788 | 0.3951 | 10 | 0.6943 |
| 0.2842 | 0.7901 | 20 | 0.2573 |
| 0.2114 | 1.1852 | 30 | 0.1828 |
| 0.2531 | 1.5802 | 40 | 0.1593 |
| 0.1673 | 1.9753 | 50 | 0.1403 |
| 0.1253 | 2.3704 | 60 | 0.1392 |
| 0.1169 | 2.7654 | 70 | 0.1356 |
| 0.0855 | 3.1605 | 80 | 0.1272 |
| 0.1248 | 3.5556 | 90 | 0.1272 |
| 0.1085 | 3.9506 | 100 | 0.1262 |
| 0.1071 | 4.3457 | 110 | 0.1271 |
| 0.0938 | 4.7407 | 120 | 0.1266 |
### Framework versions
- PEFT 0.12.0
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3 |
darkc0de/Xortron-SCE-24B-CriminalComputingConfig | darkc0de | 2025-05-02T18:29:27Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2408.07990",
"base_model:TroyDoesAI/BlackSheep-24B",
"base_model:merge:TroyDoesAI/BlackSheep-24B",
"base_model:cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition",
"base_model:merge:cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition",
"base_model:darkc0de/XortronCriminalComputingConfig",
"base_model:merge:darkc0de/XortronCriminalComputingConfig",
"base_model:huihui-ai/Mistral-Small-24B-Instruct-2501-abliterated",
"base_model:merge:huihui-ai/Mistral-Small-24B-Instruct-2501-abliterated",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-02T18:11:07Z | ---
base_model:
- darkc0de/XortronCriminalComputingConfig
- cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition
- huihui-ai/Mistral-Small-24B-Instruct-2501-abliterated
- TroyDoesAI/BlackSheep-24B
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [SCE](https://arxiv.org/abs/2408.07990) merge method using [darkc0de/XortronCriminalComputingConfig](https://huggingface.co/darkc0de/XortronCriminalComputingConfig) as a base.
### Models Merged
The following models were included in the merge:
* [cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition](https://huggingface.co/cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition)
* [huihui-ai/Mistral-Small-24B-Instruct-2501-abliterated](https://huggingface.co/huihui-ai/Mistral-Small-24B-Instruct-2501-abliterated)
* [TroyDoesAI/BlackSheep-24B](https://huggingface.co/TroyDoesAI/BlackSheep-24B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: darkc0de/XortronCriminalComputingConfig
- model: TroyDoesAI/BlackSheep-24B
- model: cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition
- model: huihui-ai/Mistral-Small-24B-Instruct-2501-abliterated
merge_method: sce
base_model: darkc0de/XortronCriminalComputingConfig
parameters:
select_topk: 0.80
tokenizer:
source: darkc0de/XortronCriminalComputingConfig
```
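A merge like this is typically reproduced by saving the YAML above to a file and invoking the mergekit CLI (paths are illustrative):

```bash
pip install mergekit
mergekit-yaml merge-config.yaml ./merged-model --cuda
```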
|
Triangle104/Violet_Magcap-12B-Q4_K_S-GGUF | Triangle104 | 2025-05-02T18:24:12Z | 0 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:Nitral-AI/Violet_Magcap-12B",
"base_model:quantized:Nitral-AI/Violet_Magcap-12B",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-02T18:08:50Z | ---
base_model: Nitral-AI/Violet_Magcap-12B
language:
- en
license: other
tags:
- llama-cpp
- gguf-my-repo
---
# Triangle104/Violet_Magcap-12B-Q4_K_S-GGUF
This model was converted to GGUF format from [`Nitral-AI/Violet_Magcap-12B`](https://huggingface.co/Nitral-AI/Violet_Magcap-12B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Nitral-AI/Violet_Magcap-12B) for more details on the model.
---
Mag-Mell-12B-R1, jacked up on SFT reasoning data like it was pre-workout for logic bros. Then for chaos, slapped together with Captain_Eris_Violet-GRPO like some twisted AI Voltron.
Double-tapped the merge with SFT on fresh reasoning data. Now it's solving problems like Bill Nye on a meme bender and hoarding cursed philosophy sh*tposts.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Violet_Magcap-12B-Q4_K_S-GGUF --hf-file violet_magcap-12b-q4_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Violet_Magcap-12B-Q4_K_S-GGUF --hf-file violet_magcap-12b-q4_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Violet_Magcap-12B-Q4_K_S-GGUF --hf-file violet_magcap-12b-q4_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Violet_Magcap-12B-Q4_K_S-GGUF --hf-file violet_magcap-12b-q4_k_s.gguf -c 2048
```
|
dgambettaphd/M_llm2_gen3_S_doc1000_synt64_lr1e-04_acm_SYNLAST | dgambettaphd | 2025-05-02T18:23:29Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-02T18:23:13Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
cwaud/2b0fb0bc-e26e-400e-9169-b06518b73893 | cwaud | 2025-05-02T18:23:26Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:TinyLlama/TinyLlama_v1.1",
"base_model:adapter:TinyLlama/TinyLlama_v1.1",
"license:apache-2.0",
"region:us"
] | null | 2025-05-02T18:19:39Z | ---
library_name: peft
license: apache-2.0
base_model: TinyLlama/TinyLlama_v1.1
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 2b0fb0bc-e26e-400e-9169-b06518b73893
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.5.2`
```yaml
adapter: lora
base_model: TinyLlama/TinyLlama_v1.1
bf16: auto
chat_template: llama3
dataset_prepared_path: /workspace/axolotl/data_prepared
datasets:
- data_files:
- e1230b33949f9bdf_train_data.json
ds_type: json
format: custom
path: /workspace/axolotl/data
type:
field_instruction: question
field_output: chosen
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: cwaud/2b0fb0bc-e26e-400e-9169-b06518b73893
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /workspace/axolotl/data/e1230b33949f9bdf_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 0ace46bc-8f88-4e70-95b9-9502b5a4d1dc
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 0ace46bc-8f88-4e70-95b9-9502b5a4d1dc
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 2b0fb0bc-e26e-400e-9169-b06518b73893
This model is a fine-tuned version of [TinyLlama/TinyLlama_v1.1](https://huggingface.co/TinyLlama/TinyLlama_v1.1) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6296
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
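A run with the config shown above is typically launched through the axolotl CLI (the config path is an assumption):

```bash
accelerate launch -m axolotl.cli.train config.yaml
```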
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.3664 | 0.0002 | 1 | 1.7174 |
| 1.5631 | 0.0007 | 3 | 1.7135 |
| 1.5262 | 0.0014 | 6 | 1.6832 |
| 1.5261 | 0.0021 | 9 | 1.6296 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.2
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3 |
dermarung/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-whiskered_climbing_termite | dermarung | 2025-05-02T18:17:32Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am whiskered climbing termite",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-27T22:19:28Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-whiskered_climbing_termite
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am whiskered climbing termite
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-whiskered_climbing_termite
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="dermarung/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-whiskered_climbing_termite", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
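A rough sketch of a GRPO setup with TRL is shown below; the dataset and reward function are illustrative stand-ins, not the swarm's actual reward:

```python
# Illustrative GRPO fine-tuning with TRL: toy reward, placeholder dataset.
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

dataset = load_dataset("trl-lib/tldr", split="train")  # placeholder prompt dataset

def reward_len(completions, **kwargs):
    # Toy reward: prefer completions close to 50 characters.
    return [-abs(50 - len(completion)) for completion in completions]

args = GRPOConfig(output_dir="Qwen2.5-0.5B-GRPO", num_generations=4, logging_steps=10)
trainer = GRPOTrainer(
    model="Gensyn/Qwen2.5-0.5B-Instruct",
    reward_funcs=reward_len,
    args=args,
    train_dataset=dataset,
)
trainer.train()
```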
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.5.1
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
mothnaZl/s1-Qwen-Qwen2.5-7B-6-32768 | mothnaZl | 2025-05-02T18:14:49Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:Qwen/Qwen2.5-7B",
"base_model:finetune:Qwen/Qwen2.5-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-02T16:43:03Z | ---
base_model: Qwen/Qwen2.5-7B
library_name: transformers
model_name: s1-Qwen-Qwen2.5-7B-6-32768
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for s1-Qwen-Qwen2.5-7B-6-32768
This model is a fine-tuned version of [Qwen/Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="mothnaZl/s1-Qwen-Qwen2.5-7B-6-32768", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/mothnazhong-hong-kong-university-of-science-and-technology/s1/runs/vhag3irs)
This model was trained with SFT.
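A minimal sketch of an SFT run with TRL (the dataset and sequence length are illustrative assumptions):

```python
# Illustrative supervised fine-tuning with TRL's SFTTrainer.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("trl-lib/Capybara", split="train")  # placeholder dataset

args = SFTConfig(output_dir="s1-Qwen-Qwen2.5-7B-sft", max_seq_length=4096)
trainer = SFTTrainer(model="Qwen/Qwen2.5-7B", args=args, train_dataset=dataset)
trainer.train()
```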
### Framework versions
- TRL: 0.12.0
- Transformers: 4.46.1
- Pytorch: 2.5.1
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
etxrnl-dxs777/SillyAI-v0.1 | etxrnl-dxs777 | 2025-05-02T18:14:28Z | 0 | 1 | null | [
"transformer",
"multimodal",
"complex-valued",
"neuro-symbolic",
"signal-processing",
"quantum-physics",
"en",
"license:mit",
"region:us"
] | null | 2025-05-02T17:18:24Z | ---
language: [en]
license: mit
tags:
- transformer
- multimodal
- complex-valued
- neuro-symbolic
- signal-processing
- quantum-physics
datasets: []
metrics: []
---
# SillyAI
SillyAI is an advanced, complex-valued neuro-symbolic transformer-based model that leverages the power of complex numbers for deep learning. By operating natively in complex space, SillyAI can capture richer patterns and relationships that are often missed by traditional real-valued models. The complex-valued architecture enables it to encode both magnitude and phase information, making it particularly well-suited for tasks involving signal processing, quantum physics simulations, and other domains where phase relationships are important.
## Model Description
- **Architecture**: Multimodal, Transformer, Complex-Valued, Neuro-Symbolic
- **Key Features**:
- Complex number operations for enhanced learning in signal processing and quantum simulations
- Concept graphing for visualizing relationships between concepts with energy-based weighting
- Dynamic reasoning powered by LFU (Least Frequently Used) algorithm, improving over time
- SILLY, a custom ISA for emitting high-level assembly that corresponds to proofs or problem-solving steps
- Modular design with a plugin system for on-demand feature activation or extension
## How to Use
To use SillyAI in your project, you can load the model as follows:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("etxrnl-dxs777/SillyAI-v0.1")
model = AutoModel.from_pretrained("etxrnl-dxs777/SillyAI-v0.1")
# Example inference:
input_text = "Your input text here"
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model(**inputs)
``` |
WatsonOverHere/movie_lines_lora | WatsonOverHere | 2025-05-02T18:13:36Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:WatsonOverHere/mysterious_mistral-small-3.1-24b",
"base_model:adapter:WatsonOverHere/mysterious_mistral-small-3.1-24b",
"region:us"
] | null | 2025-05-02T00:11:06Z | ---
base_model: WatsonOverHere/mysterious_mistral-small-3.1-24b
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2 |
ClaudioItaly/Qwen2.5-7B-Gutenberg-FT-Q4_K_M-GGUF | ClaudioItaly | 2025-05-02T18:12:12Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"sft",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:ClaudioItaly/Qwen2.5-7B-Gutenberg-FT",
"base_model:quantized:ClaudioItaly/Qwen2.5-7B-Gutenberg-FT",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-02T18:11:50Z | ---
base_model: ClaudioItaly/Qwen2.5-7B-Gutenberg-FT
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
- llama-cpp
- gguf-my-repo
---
# ClaudioItaly/Qwen2.5-7B-Gutenberg-FT-Q4_K_M-GGUF
This model was converted to GGUF format from [`ClaudioItaly/Qwen2.5-7B-Gutenberg-FT`](https://huggingface.co/ClaudioItaly/Qwen2.5-7B-Gutenberg-FT) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/ClaudioItaly/Qwen2.5-7B-Gutenberg-FT) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo ClaudioItaly/Qwen2.5-7B-Gutenberg-FT-Q4_K_M-GGUF --hf-file qwen2.5-7b-gutenberg-ft-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo ClaudioItaly/Qwen2.5-7B-Gutenberg-FT-Q4_K_M-GGUF --hf-file qwen2.5-7b-gutenberg-ft-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo ClaudioItaly/Qwen2.5-7B-Gutenberg-FT-Q4_K_M-GGUF --hf-file qwen2.5-7b-gutenberg-ft-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo ClaudioItaly/Qwen2.5-7B-Gutenberg-FT-Q4_K_M-GGUF --hf-file qwen2.5-7b-gutenberg-ft-q4_k_m.gguf -c 2048
```
|
filipesantoscv11/0466ac23-308f-4832-a807-308b0085e08c | filipesantoscv11 | 2025-05-02T18:11:29Z | 0 | 0 | peft | [
"peft",
"safetensors",
"gpt_neox",
"axolotl",
"generated_from_trainer",
"base_model:EleutherAI/pythia-70m",
"base_model:adapter:EleutherAI/pythia-70m",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-02T18:07:04Z | ---
library_name: peft
license: apache-2.0
base_model: EleutherAI/pythia-70m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 0466ac23-308f-4832-a807-308b0085e08c
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: EleutherAI/pythia-70m
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 689e857911a49969_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/689e857911a49969_train_data.json
type:
field_instruction: problem
field_output: reasoning_solution
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: filipesantoscv11/0466ac23-308f-4832-a807-308b0085e08c
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/689e857911a49969_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 75b45eb6-9efa-4fa7-9f52-55c4b936ad66
wandb_project: s56-6
wandb_run: your_name
wandb_runid: 75b45eb6-9efa-4fa7-9f52-55c4b936ad66
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 0466ac23-308f-4832-a807-308b0085e08c
This model is a fine-tuned version of [EleutherAI/pythia-70m](https://huggingface.co/EleutherAI/pythia-70m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.9119
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.9456 | 0.0122 | 200 | 3.9119 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
bruhzair/ignore-merge-4 | bruhzair | 2025-05-02T18:09:59Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-02T17:39:31Z | ---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
---
# way2
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the Passthrough merge method.
### Models Merged
The following models were included in the merge:
* /workspace/cache/models--LatitudeGames--Wayfarer-Large-70B-Llama-3.3/snapshots/68cb7a33f692be64d4b146576838be85593a7459
### Configuration
The following YAML configuration was used to produce this model:
```yaml
dtype: bfloat16
merge_method: passthrough
modules:
default:
slices:
- sources:
- layer_range: [0, 4]
model: /workspace/cache/models--LatitudeGames--Wayfarer-Large-70B-Llama-3.3/snapshots/68cb7a33f692be64d4b146576838be85593a7459
- sources:
- layer_range: [2, 4]
model: /workspace/cache/models--LatitudeGames--Wayfarer-Large-70B-Llama-3.3/snapshots/68cb7a33f692be64d4b146576838be85593a7459
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [4, 8]
model: /workspace/cache/models--LatitudeGames--Wayfarer-Large-70B-Llama-3.3/snapshots/68cb7a33f692be64d4b146576838be85593a7459
- sources:
- layer_range: [6, 8]
model: /workspace/cache/models--LatitudeGames--Wayfarer-Large-70B-Llama-3.3/snapshots/68cb7a33f692be64d4b146576838be85593a7459
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [8, 12]
model: /workspace/cache/models--LatitudeGames--Wayfarer-Large-70B-Llama-3.3/snapshots/68cb7a33f692be64d4b146576838be85593a7459
- sources:
- layer_range: [10, 12]
model: /workspace/cache/models--LatitudeGames--Wayfarer-Large-70B-Llama-3.3/snapshots/68cb7a33f692be64d4b146576838be85593a7459
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [12, 16]
model: /workspace/cache/models--LatitudeGames--Wayfarer-Large-70B-Llama-3.3/snapshots/68cb7a33f692be64d4b146576838be85593a7459
- sources:
- layer_range: [14, 16]
model: /workspace/cache/models--LatitudeGames--Wayfarer-Large-70B-Llama-3.3/snapshots/68cb7a33f692be64d4b146576838be85593a7459
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [16, 20]
model: /workspace/cache/models--LatitudeGames--Wayfarer-Large-70B-Llama-3.3/snapshots/68cb7a33f692be64d4b146576838be85593a7459
- sources:
- layer_range: [18, 20]
model: /workspace/cache/models--LatitudeGames--Wayfarer-Large-70B-Llama-3.3/snapshots/68cb7a33f692be64d4b146576838be85593a7459
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [20, 24]
model: /workspace/cache/models--LatitudeGames--Wayfarer-Large-70B-Llama-3.3/snapshots/68cb7a33f692be64d4b146576838be85593a7459
- sources:
- layer_range: [22, 24]
model: /workspace/cache/models--LatitudeGames--Wayfarer-Large-70B-Llama-3.3/snapshots/68cb7a33f692be64d4b146576838be85593a7459
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [24, 28]
model: /workspace/cache/models--LatitudeGames--Wayfarer-Large-70B-Llama-3.3/snapshots/68cb7a33f692be64d4b146576838be85593a7459
- sources:
- layer_range: [26, 28]
model: /workspace/cache/models--LatitudeGames--Wayfarer-Large-70B-Llama-3.3/snapshots/68cb7a33f692be64d4b146576838be85593a7459
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [28, 32]
model: /workspace/cache/models--LatitudeGames--Wayfarer-Large-70B-Llama-3.3/snapshots/68cb7a33f692be64d4b146576838be85593a7459
- sources:
- layer_range: [30, 32]
model: /workspace/cache/models--LatitudeGames--Wayfarer-Large-70B-Llama-3.3/snapshots/68cb7a33f692be64d4b146576838be85593a7459
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [32, 36]
model: /workspace/cache/models--LatitudeGames--Wayfarer-Large-70B-Llama-3.3/snapshots/68cb7a33f692be64d4b146576838be85593a7459
- sources:
- layer_range: [34, 36]
model: /workspace/cache/models--LatitudeGames--Wayfarer-Large-70B-Llama-3.3/snapshots/68cb7a33f692be64d4b146576838be85593a7459
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [36, 40]
model: /workspace/cache/models--LatitudeGames--Wayfarer-Large-70B-Llama-3.3/snapshots/68cb7a33f692be64d4b146576838be85593a7459
- sources:
- layer_range: [38, 40]
model: /workspace/cache/models--LatitudeGames--Wayfarer-Large-70B-Llama-3.3/snapshots/68cb7a33f692be64d4b146576838be85593a7459
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [40, 44]
model: /workspace/cache/models--LatitudeGames--Wayfarer-Large-70B-Llama-3.3/snapshots/68cb7a33f692be64d4b146576838be85593a7459
- sources:
- layer_range: [42, 44]
model: /workspace/cache/models--LatitudeGames--Wayfarer-Large-70B-Llama-3.3/snapshots/68cb7a33f692be64d4b146576838be85593a7459
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [44, 48]
model: /workspace/cache/models--LatitudeGames--Wayfarer-Large-70B-Llama-3.3/snapshots/68cb7a33f692be64d4b146576838be85593a7459
- sources:
- layer_range: [46, 48]
model: /workspace/cache/models--LatitudeGames--Wayfarer-Large-70B-Llama-3.3/snapshots/68cb7a33f692be64d4b146576838be85593a7459
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [48, 52]
model: /workspace/cache/models--LatitudeGames--Wayfarer-Large-70B-Llama-3.3/snapshots/68cb7a33f692be64d4b146576838be85593a7459
- sources:
- layer_range: [50, 52]
model: /workspace/cache/models--LatitudeGames--Wayfarer-Large-70B-Llama-3.3/snapshots/68cb7a33f692be64d4b146576838be85593a7459
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [52, 56]
model: /workspace/cache/models--LatitudeGames--Wayfarer-Large-70B-Llama-3.3/snapshots/68cb7a33f692be64d4b146576838be85593a7459
- sources:
- layer_range: [54, 56]
model: /workspace/cache/models--LatitudeGames--Wayfarer-Large-70B-Llama-3.3/snapshots/68cb7a33f692be64d4b146576838be85593a7459
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [56, 60]
model: /workspace/cache/models--LatitudeGames--Wayfarer-Large-70B-Llama-3.3/snapshots/68cb7a33f692be64d4b146576838be85593a7459
- sources:
- layer_range: [58, 60]
model: /workspace/cache/models--LatitudeGames--Wayfarer-Large-70B-Llama-3.3/snapshots/68cb7a33f692be64d4b146576838be85593a7459
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [60, 64]
model: /workspace/cache/models--LatitudeGames--Wayfarer-Large-70B-Llama-3.3/snapshots/68cb7a33f692be64d4b146576838be85593a7459
- sources:
- layer_range: [62, 64]
model: /workspace/cache/models--LatitudeGames--Wayfarer-Large-70B-Llama-3.3/snapshots/68cb7a33f692be64d4b146576838be85593a7459
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [64, 68]
model: /workspace/cache/models--LatitudeGames--Wayfarer-Large-70B-Llama-3.3/snapshots/68cb7a33f692be64d4b146576838be85593a7459
- sources:
- layer_range: [66, 68]
model: /workspace/cache/models--LatitudeGames--Wayfarer-Large-70B-Llama-3.3/snapshots/68cb7a33f692be64d4b146576838be85593a7459
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [68, 72]
model: /workspace/cache/models--LatitudeGames--Wayfarer-Large-70B-Llama-3.3/snapshots/68cb7a33f692be64d4b146576838be85593a7459
- sources:
- layer_range: [70, 72]
model: /workspace/cache/models--LatitudeGames--Wayfarer-Large-70B-Llama-3.3/snapshots/68cb7a33f692be64d4b146576838be85593a7459
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [72, 76]
model: /workspace/cache/models--LatitudeGames--Wayfarer-Large-70B-Llama-3.3/snapshots/68cb7a33f692be64d4b146576838be85593a7459
- sources:
- layer_range: [74, 76]
model: /workspace/cache/models--LatitudeGames--Wayfarer-Large-70B-Llama-3.3/snapshots/68cb7a33f692be64d4b146576838be85593a7459
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [76, 80]
model: /workspace/cache/models--LatitudeGames--Wayfarer-Large-70B-Llama-3.3/snapshots/68cb7a33f692be64d4b146576838be85593a7459
- sources:
- layer_range: [78, 80]
model: /workspace/cache/models--LatitudeGames--Wayfarer-Large-70B-Llama-3.3/snapshots/68cb7a33f692be64d4b146576838be85593a7459
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
```
|
Humphery7/yoruba-english-multilingual-extended-1 | Humphery7 | 2025-05-02T18:09:20Z | 18 | 0 | transformers | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-04-12T04:15:10Z | ---
library_name: transformers
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
model-index:
- name: yoruba-english-multilingual-extended-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# yoruba-english-multilingual-extended-1
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.0280
- eval_wer: 0.1190
- eval_runtime: 35.6397
- eval_samples_per_second: 2.806
- eval_steps_per_second: 0.365
- epoch: 4.9523
- step: 13500
## Model description
More information needed
## Intended uses & limitations
More information needed
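A minimal transcription sketch (the audio path is an assumption; long files may need chunking):

```python
# Illustrative speech-to-text inference with the fine-tuned Whisper checkpoint.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Humphery7/yoruba-english-multilingual-extended-1",
)
print(asr("sample.wav")["text"])  # path to a local audio file (assumption)
```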
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 3
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 6
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 80
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 3.3.1
- Tokenizers 0.21.0
|
mradermacher/qwen-0.6b-coder-GGUF | mradermacher | 2025-05-02T18:07:19Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:HuggingFaceH4/CodeAlpaca_20K",
"base_model:XformAI-india/qwen-0.6b-coder",
"base_model:quantized:XformAI-india/qwen-0.6b-coder",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-02T18:00:41Z | ---
base_model: XformAI-india/qwen-0.6b-coder
datasets:
- HuggingFaceH4/CodeAlpaca_20K
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/XformAI-india/qwen-0.6b-coder
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
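For single-file quants like the ones below, a direct llama.cpp invocation might look like this (the Q4_K_M file name is taken from the table):

```bash
llama-cli --hf-repo mradermacher/qwen-0.6b-coder-GGUF --hf-file qwen-0.6b-coder.Q4_K_M.gguf -p "Write a function that reverses a string"
```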
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/qwen-0.6b-coder-GGUF/resolve/main/qwen-0.6b-coder.Q2_K.gguf) | Q2_K | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/qwen-0.6b-coder-GGUF/resolve/main/qwen-0.6b-coder.Q3_K_S.gguf) | Q3_K_S | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/qwen-0.6b-coder-GGUF/resolve/main/qwen-0.6b-coder.Q3_K_M.gguf) | Q3_K_M | 0.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/qwen-0.6b-coder-GGUF/resolve/main/qwen-0.6b-coder.Q3_K_L.gguf) | Q3_K_L | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/qwen-0.6b-coder-GGUF/resolve/main/qwen-0.6b-coder.IQ4_XS.gguf) | IQ4_XS | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/qwen-0.6b-coder-GGUF/resolve/main/qwen-0.6b-coder.Q4_K_S.gguf) | Q4_K_S | 0.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/qwen-0.6b-coder-GGUF/resolve/main/qwen-0.6b-coder.Q4_K_M.gguf) | Q4_K_M | 0.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/qwen-0.6b-coder-GGUF/resolve/main/qwen-0.6b-coder.Q5_K_S.gguf) | Q5_K_S | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/qwen-0.6b-coder-GGUF/resolve/main/qwen-0.6b-coder.Q5_K_M.gguf) | Q5_K_M | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/qwen-0.6b-coder-GGUF/resolve/main/qwen-0.6b-coder.Q6_K.gguf) | Q6_K | 0.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/qwen-0.6b-coder-GGUF/resolve/main/qwen-0.6b-coder.Q8_0.gguf) | Q8_0 | 0.7 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/qwen-0.6b-coder-GGUF/resolve/main/qwen-0.6b-coder.f16.gguf) | f16 | 1.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
vmpsergio/9b75c6a0-390e-4dcd-a74f-c7e1f6ff729d | vmpsergio | 2025-05-02T18:07:14Z | 0 | 0 | peft | [
"peft",
"safetensors",
"gpt_neox",
"axolotl",
"generated_from_trainer",
"base_model:EleutherAI/pythia-70m",
"base_model:adapter:EleutherAI/pythia-70m",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-02T17:50:25Z | ---
library_name: peft
license: apache-2.0
base_model: EleutherAI/pythia-70m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 9b75c6a0-390e-4dcd-a74f-c7e1f6ff729d
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: EleutherAI/pythia-70m
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- e2d471edf16c56fc_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/e2d471edf16c56fc_train_data.json
type:
field_instruction: en
field_output: fr
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: vmpsergio/9b75c6a0-390e-4dcd-a74f-c7e1f6ff729d
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 6
mixed_precision: bf16
mlflow_experiment_name: /tmp/e2d471edf16c56fc_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: bf14246a-8435-448e-a3e4-a15f5df4da79
wandb_project: s56-2
wandb_run: your_name
wandb_runid: bf14246a-8435-448e-a3e4-a15f5df4da79
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 9b75c6a0-390e-4dcd-a74f-c7e1f6ff729d
This model is a fine-tuned version of [EleutherAI/pythia-70m](https://huggingface.co/EleutherAI/pythia-70m) on the dataset specified in the axolotl config above (the trainer recorded no dataset name).
It achieves the following results on the evaluation set:
- Loss: 5.9049
## Model description
More information needed
## Intended uses & limitations
More information needed
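No usage code is provided, so here is a minimal, untested inference sketch: it assumes the repo holds only the LoRA adapter (as the PEFT tag suggests) and skips the 8-bit quantization used during training.
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "EleutherAI/pythia-70m"
adapter_id = "vmpsergio/9b75c6a0-390e-4dcd-a74f-c7e1f6ff729d"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA adapter

# The axolotl config above maps English inputs to French outputs.
inputs = tokenizer("Good morning, how are you?", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=30)[0]))
```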
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: AdamW (bitsandbytes 8-bit) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 6.631 | 0.0013 | 200 | 5.9049 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
raulgdp/NER-finetunining-Bert-Large-cased | raulgdp | 2025-05-02T18:02:32Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2002",
"base_model:google-bert/bert-large-cased",
"base_model:finetune:google-bert/bert-large-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2025-05-02T17:21:21Z | ---
library_name: transformers
license: apache-2.0
base_model: google-bert/bert-large-cased
tags:
- generated_from_trainer
datasets:
- conll2002
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: NER-finetunining-Bert-Large-cased
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2002
type: conll2002
config: es
split: validation
args: es
metrics:
- name: Precision
type: precision
value: 0.7852760736196319
- name: Recall
type: recall
value: 0.8235294117647058
- name: F1
type: f1
value: 0.803947958725886
- name: Accuracy
type: accuracy
value: 0.9718248345206437
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# NER-finetunining-Bert-Large-cased
This model is a fine-tuned version of [google-bert/bert-large-cased](https://huggingface.co/google-bert/bert-large-cased) on the conll2002 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1418
- Precision: 0.7853
- Recall: 0.8235
- F1: 0.8039
- Accuracy: 0.9718
## Model description
More information needed
## Intended uses & limitations
More information needed
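The card omits usage code; a minimal inference sketch for Spanish NER (the model was fine-tuned on the CoNLL-2002 `es` split) might look like this:
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="raulgdp/NER-finetunining-Bert-Large-cased",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)
print(ner("El presidente de Colombia visitó la Universidad del Valle en Cali."))
```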
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0896 | 1.0 | 1041 | 0.1297 | 0.7488 | 0.7829 | 0.7654 | 0.9667 |
| 0.0525 | 2.0 | 2082 | 0.1303 | 0.7548 | 0.8134 | 0.7830 | 0.9691 |
| 0.0275 | 3.0 | 3123 | 0.1418 | 0.7853 | 0.8235 | 0.8039 | 0.9718 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1
|
deswaq/iuh4 | deswaq | 2025-05-02T18:02:01Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-02T17:58:27Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
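Based only on the repo tags (`llama`, `text-generation`), a minimal, untested sketch would be:
```python
from transformers import pipeline

generator = pipeline("text-generation", model="deswaq/iuh4", device_map="auto")
print(generator("Hello, my name is", max_new_tokens=30)[0]["generated_text"])
```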
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ksang/W2S_llama8b_lora_sequenceclassification_model | ksang | 2025-05-02T17:59:25Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-02T17:59:04Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
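The repository name suggests a LoRA adapter for a Llama-8B sequence classifier; under that assumption, PEFT's auto class can resolve the base model from the adapter config:
```python
from peft import AutoPeftModelForSequenceClassification
from transformers import AutoTokenizer

repo = "ksang/W2S_llama8b_lora_sequenceclassification_model"

# Loads the base model named in the adapter config and attaches the LoRA weights.
model = AutoPeftModelForSequenceClassification.from_pretrained(repo)
base_id = model.peft_config["default"].base_model_name_or_path
tokenizer = AutoTokenizer.from_pretrained(base_id)
```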
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Mod78/Text | Mod78 | 2025-05-02T17:58:22Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-02T17:58:21Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
gradientrouting-spar/toy_goodharting_gemma-2-2b-it_fruits_vegetables_naive_outcome_0_6_0_1_MC | gradientrouting-spar | 2025-05-02T17:58:17Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-02T11:18:33Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
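The repository name suggests a fine-tune of `google/gemma-2-2b-it` (unverified); under that assumption, a chat-style sketch with the `transformers` pipeline would be:
```python
from transformers import pipeline

chat = pipeline(
    "text-generation",
    model="gradientrouting-spar/toy_goodharting_gemma-2-2b-it_fruits_vegetables_naive_outcome_0_6_0_1_MC",
)
messages = [{"role": "user", "content": "Name three vegetables."}]
print(chat(messages, max_new_tokens=40)[0]["generated_text"])
```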
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
joboffer/ed089169-6b1d-43cb-8026-119df2de3a23 | joboffer | 2025-05-02T17:55:09Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:openlm-research/open_llama_3b",
"base_model:adapter:openlm-research/open_llama_3b",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-02T17:44:49Z | ---
library_name: peft
license: apache-2.0
base_model: openlm-research/open_llama_3b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: ed089169-6b1d-43cb-8026-119df2de3a23
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: openlm-research/open_llama_3b
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 83b1700bf8a9ee56_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/83b1700bf8a9ee56_train_data.json
type:
field_instruction: abstract
field_output: title
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: joboffer/ed089169-6b1d-43cb-8026-119df2de3a23
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/83b1700bf8a9ee56_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 1915c1a2-c038-45d6-98b0-3f4a5eeb7f31
wandb_project: s56-33
wandb_run: your_name
wandb_runid: 1915c1a2-c038-45d6-98b0-3f4a5eeb7f31
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# ed089169-6b1d-43cb-8026-119df2de3a23
This model is a fine-tuned version of [openlm-research/open_llama_3b](https://huggingface.co/openlm-research/open_llama_3b) on the dataset specified in the axolotl config above (the trainer recorded no dataset name).
It achieves the following results on the evaluation set:
- Loss: 1.7995
## Model description
More information needed
## Intended uses & limitations
More information needed
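According to the axolotl config above, the adapter was trained to generate paper titles from abstracts; a minimal, untested sketch (the example abstract is hypothetical) could look like:
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "openlm-research/open_llama_3b"
tok = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(base, "joboffer/ed089169-6b1d-43cb-8026-119df2de3a23")

abstract = "We study how deep networks memorize random labels."  # hypothetical
ids = tok(abstract, return_tensors="pt")
print(tok.decode(model.generate(**ids, max_new_tokens=32)[0], skip_special_tokens=True))
```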
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (bitsandbytes 8-bit) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.6383 | 0.0048 | 200 | 1.7995 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
marialvsantiago/ddd75133-9502-4b2a-aba4-a4d4d091a5c4 | marialvsantiago | 2025-05-02T17:53:58Z | 0 | 0 | peft | [
"peft",
"safetensors",
"gpt_neox",
"axolotl",
"generated_from_trainer",
"base_model:EleutherAI/pythia-70m",
"base_model:adapter:EleutherAI/pythia-70m",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-02T17:52:51Z | ---
library_name: peft
license: apache-2.0
base_model: EleutherAI/pythia-70m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: ddd75133-9502-4b2a-aba4-a4d4d091a5c4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: EleutherAI/pythia-70m
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 689e857911a49969_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/689e857911a49969_train_data.json
type:
field_instruction: problem
field_output: reasoning_solution
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: marialvsantiago/ddd75133-9502-4b2a-aba4-a4d4d091a5c4
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/689e857911a49969_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 75b45eb6-9efa-4fa7-9f52-55c4b936ad66
wandb_project: s56-33
wandb_run: your_name
wandb_runid: 75b45eb6-9efa-4fa7-9f52-55c4b936ad66
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# ddd75133-9502-4b2a-aba4-a4d4d091a5c4
This model is a fine-tuned version of [EleutherAI/pythia-70m](https://huggingface.co/EleutherAI/pythia-70m) on the dataset specified in the axolotl config above (the trainer recorded no dataset name).
It achieves the following results on the evaluation set:
- Loss: 4.4297
## Model description
More information needed
## Intended uses & limitations
More information needed
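A minimal sketch for loading the adapter and merging it into the base weights (the output directory name is hypothetical):
```python
from peft import AutoPeftModelForCausalLM

# Resolves the base model from the adapter config, attaches the LoRA weights,
# then folds the deltas into the base so the result is a plain transformers model.
model = AutoPeftModelForCausalLM.from_pretrained(
    "marialvsantiago/ddd75133-9502-4b2a-aba4-a4d4d091a5c4"
)
merged = model.merge_and_unload()
merged.save_pretrained("pythia-70m-reasoning-merged")  # hypothetical path
```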
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (bitsandbytes 8-bit) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 4.4401 | 0.0122 | 200 | 4.4297 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
BlcaCola/Qwen2.5-7B-Instruct-Q6_K-GGUF | BlcaCola | 2025-05-02T17:53:34Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"chat",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-05-02T17:53:05Z | ---
base_model: Qwen/Qwen2.5-7B-Instruct
language:
- en
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-7B-Instruct/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- chat
- llama-cpp
- gguf-my-repo
---
# BlcaCola/Qwen2.5-7B-Instruct-Q6_K-GGUF
This model was converted to GGUF format from [`Qwen/Qwen2.5-7B-Instruct`](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo BlcaCola/Qwen2.5-7B-Instruct-Q6_K-GGUF --hf-file qwen2.5-7b-instruct-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo BlcaCola/Qwen2.5-7B-Instruct-Q6_K-GGUF --hf-file qwen2.5-7b-instruct-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo BlcaCola/Qwen2.5-7B-Instruct-Q6_K-GGUF --hf-file qwen2.5-7b-instruct-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo BlcaCola/Qwen2.5-7B-Instruct-Q6_K-GGUF --hf-file qwen2.5-7b-instruct-q6_k.gguf -c 2048
```
|
Triangle104/huihui-ai_Qwen3-4B-abliterated-Q4_K_S-GGUF | Triangle104 | 2025-05-02T17:50:48Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"chat",
"abliterated",
"uncensored",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:huihui-ai/Qwen3-4B-abliterated",
"base_model:quantized:huihui-ai/Qwen3-4B-abliterated",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-02T17:50:33Z | ---
base_model: huihui-ai/Qwen3-4B-abliterated
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-4B/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- chat
- abliterated
- uncensored
- llama-cpp
- gguf-my-repo
---
# Triangle104/Qwen3-4B-abliterated-Q4_K_S-GGUF
This model was converted to GGUF format from [`huihui-ai/Qwen3-4B-abliterated`](https://huggingface.co/huihui-ai/Qwen3-4B-abliterated) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/huihui-ai/Qwen3-4B-abliterated) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Qwen3-4B-abliterated-Q4_K_S-GGUF --hf-file qwen3-4b-abliterated-q4_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Qwen3-4B-abliterated-Q4_K_S-GGUF --hf-file qwen3-4b-abliterated-q4_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Qwen3-4B-abliterated-Q4_K_S-GGUF --hf-file qwen3-4b-abliterated-q4_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Qwen3-4B-abliterated-Q4_K_S-GGUF --hf-file qwen3-4b-abliterated-q4_k_s.gguf -c 2048
```
|
iboero16/SFT-2000 | iboero16 | 2025-05-02T17:50:46Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"arxiv:1910.09700",
"base_model:huggyllama/llama-7b",
"base_model:adapter:huggyllama/llama-7b",
"region:us"
] | null | 2025-05-02T17:44:09Z | ---
base_model: huggyllama/llama-7b
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
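Since the card metadata names `huggyllama/llama-7b` as the base model, a minimal sketch for loading this PEFT adapter on top of it would be:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "huggyllama/llama-7b"  # from the card metadata
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, "iboero16/SFT-2000")
```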
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1 |
filipesantoscv11/4aee8fb8-4cfa-4384-be3d-7605ebdd97cc | filipesantoscv11 | 2025-05-02T17:48:23Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen1.5-0.5B",
"base_model:adapter:Qwen/Qwen1.5-0.5B",
"license:other",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-02T17:27:55Z | ---
library_name: peft
license: other
base_model: Qwen/Qwen1.5-0.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 4aee8fb8-4cfa-4384-be3d-7605ebdd97cc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen1.5-0.5B
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 519dc324fa90419b_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/519dc324fa90419b_train_data.json
type:
field_input: raw_texts
field_instruction: gen_questions
field_output: Positive
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: filipesantoscv11/4aee8fb8-4cfa-4384-be3d-7605ebdd97cc
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/519dc324fa90419b_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 34c11394-037e-4743-b560-708619a820f6
wandb_project: s56-6
wandb_run: your_name
wandb_runid: 34c11394-037e-4743-b560-708619a820f6
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 4aee8fb8-4cfa-4384-be3d-7605ebdd97cc
This model is a fine-tuned version of [Qwen/Qwen1.5-0.5B](https://huggingface.co/Qwen/Qwen1.5-0.5B) on the dataset specified in the axolotl config above (the trainer recorded no dataset name).
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
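A minimal, untested sketch for loading the adapter (note that the reported validation loss is `nan`, so treat outputs with caution):
```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter = "filipesantoscv11/4aee8fb8-4cfa-4384-be3d-7605ebdd97cc"
model = AutoPeftModelForCausalLM.from_pretrained(adapter, torch_dtype=torch.bfloat16)
tok = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-0.5B")  # base model tokenizer

ids = tok("Is this passage relevant to the question?", return_tensors="pt")
print(tok.decode(model.generate(**ids, max_new_tokens=16)[0]))
```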
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (bitsandbytes 8-bit) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0143 | 0.0104 | 200 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mradermacher/IOM-Gemma-3-1B-GGUF | mradermacher | 2025-05-02T17:44:50Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:XUxs/IOM-Gemma-3-1B",
"base_model:quantized:XUxs/IOM-Gemma-3-1B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-02T17:33:11Z | ---
base_model: XUxs/IOM-Gemma-3-1B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/XUxs/IOM-Gemma-3-1B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/IOM-Gemma-3-1B-GGUF/resolve/main/IOM-Gemma-3-1B.Q3_K_S.gguf) | Q3_K_S | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/IOM-Gemma-3-1B-GGUF/resolve/main/IOM-Gemma-3-1B.Q2_K.gguf) | Q2_K | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/IOM-Gemma-3-1B-GGUF/resolve/main/IOM-Gemma-3-1B.IQ4_XS.gguf) | IQ4_XS | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/IOM-Gemma-3-1B-GGUF/resolve/main/IOM-Gemma-3-1B.Q3_K_M.gguf) | Q3_K_M | 0.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/IOM-Gemma-3-1B-GGUF/resolve/main/IOM-Gemma-3-1B.Q3_K_L.gguf) | Q3_K_L | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/IOM-Gemma-3-1B-GGUF/resolve/main/IOM-Gemma-3-1B.Q4_K_S.gguf) | Q4_K_S | 0.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/IOM-Gemma-3-1B-GGUF/resolve/main/IOM-Gemma-3-1B.Q4_K_M.gguf) | Q4_K_M | 0.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/IOM-Gemma-3-1B-GGUF/resolve/main/IOM-Gemma-3-1B.Q5_K_S.gguf) | Q5_K_S | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/IOM-Gemma-3-1B-GGUF/resolve/main/IOM-Gemma-3-1B.Q5_K_M.gguf) | Q5_K_M | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/IOM-Gemma-3-1B-GGUF/resolve/main/IOM-Gemma-3-1B.Q6_K.gguf) | Q6_K | 1.1 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/IOM-Gemma-3-1B-GGUF/resolve/main/IOM-Gemma-3-1B.Q8_0.gguf) | Q8_0 | 1.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/IOM-Gemma-3-1B-GGUF/resolve/main/IOM-Gemma-3-1B.f16.gguf) | f16 | 2.1 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
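For running a quant directly from Python, one option is the `llama-cpp-python` bindings (assuming they are installed); `Llama.from_pretrained` downloads the chosen file from the Hub:
```python
from llama_cpp import Llama  # pip install llama-cpp-python

llm = Llama.from_pretrained(
    repo_id="mradermacher/IOM-Gemma-3-1B-GGUF",
    filename="IOM-Gemma-3-1B.Q4_K_M.gguf",
    n_ctx=2048,
)
print(llm("Hello", max_tokens=16)["choices"][0]["text"])
```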
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/VamMed1.5-4B-GGUF | mradermacher | 2025-05-02T17:44:43Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"gemma3",
"en",
"base_model:vamcrizer/VamMed1.5-4B",
"base_model:quantized:vamcrizer/VamMed1.5-4B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-02T17:17:57Z | ---
base_model: vamcrizer/VamMed1.5-4B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/vamcrizer/VamMed1.5-4B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/VamMed1.5-4B-GGUF/resolve/main/VamMed1.5-4B.Q2_K.gguf) | Q2_K | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/VamMed1.5-4B-GGUF/resolve/main/VamMed1.5-4B.Q3_K_S.gguf) | Q3_K_S | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/VamMed1.5-4B-GGUF/resolve/main/VamMed1.5-4B.Q3_K_M.gguf) | Q3_K_M | 2.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/VamMed1.5-4B-GGUF/resolve/main/VamMed1.5-4B.Q3_K_L.gguf) | Q3_K_L | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/VamMed1.5-4B-GGUF/resolve/main/VamMed1.5-4B.IQ4_XS.gguf) | IQ4_XS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/VamMed1.5-4B-GGUF/resolve/main/VamMed1.5-4B.Q4_K_S.gguf) | Q4_K_S | 2.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/VamMed1.5-4B-GGUF/resolve/main/VamMed1.5-4B.Q4_K_M.gguf) | Q4_K_M | 2.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/VamMed1.5-4B-GGUF/resolve/main/VamMed1.5-4B.Q5_K_S.gguf) | Q5_K_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/VamMed1.5-4B-GGUF/resolve/main/VamMed1.5-4B.Q5_K_M.gguf) | Q5_K_M | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/VamMed1.5-4B-GGUF/resolve/main/VamMed1.5-4B.Q6_K.gguf) | Q6_K | 3.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/VamMed1.5-4B-GGUF/resolve/main/VamMed1.5-4B.Q8_0.gguf) | Q8_0 | 4.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/VamMed1.5-4B-GGUF/resolve/main/VamMed1.5-4B.f16.gguf) | f16 | 7.9 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
SemanticAlignment/Mistral-v0.1-Italian-CLP | SemanticAlignment | 2025-05-02T17:42:52Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"it",
"en",
"arxiv:2504.17025",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:finetune:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-09-10T08:28:41Z | ---
language:
- it
- en
license: apache-2.0
pipeline_tag: text-generation
library_name: transformers
base_model:
- mistralai/Mistral-7B-v0.1
---
# Mistral-7B-v0.1-Italian-CLP
<div align="center">
<img src="https://github.com/Andrew-Wyn/images/blob/master/sava/italian_adapt-img.jpg?raw=true" width="400" height="400" style="border-radius:10%" />
</div>
The **Mistral-7B-v0.1-Adapted** collection of large language models (LLMs) is a set of 7B generative models (text in/text out) adapted from **Mistral-7B-v0.1**.
*Mistral-v0.1-Italian-CLP* is a continually trained Mistral model obtained after tokenizer substitution.
After adaptation, this model's tokenizer is the same as that of [Minerva-3B](https://huggingface.co/sapienzanlp/Minerva-3B-base-v1.0).
**Model developer:** SapienzaNLP, ISTI-CNR, ILC-CNR
**Model Architecture:** Mistral-7B-v0.1-Adapted is an auto-regressive language model that uses an optimized transformer architecture.
## Data used for the adaptation
The **Mistral-7B-v0.1-Adapted** models are trained on a collection of Italian and English data extracted from [CulturaX](https://huggingface.co/datasets/uonlp/CulturaX).
The data are skewed toward Italian, with English making up one quarter of the total: the first 9B tokens are extracted from the Italian portion of CulturaX and the first 3B tokens from the English portion.
## Use with Transformers
You can run conversational inference using the Transformers pipeline abstraction or by leveraging the Auto classes with the generate() function.
Make sure to update your transformers installation via `pip install --upgrade transformers`.
```python
import transformers
import torch
model_id = "SemanticAlignment/Mistral-v0.1-Italian-CLP"
pipeline = transformers.pipeline(
"text-generation", model=model_id, model_kwargs={"torch_dtype": torch.bfloat16}, device_map="auto"
)
pipeline("Cosa si può fare in una bella giornata di sole?")
```
Code: https://github.com/SapienzaNLP/sava
## Citation
If you use any part of this work, please consider citing the paper as follows:
```bibtex
@misc{moroni2025optimizingllmsitalianreducing,
title={Optimizing LLMs for Italian: Reducing Token Fertility and Enhancing Efficiency Through Vocabulary Adaptation},
author={Luca Moroni and Giovanni Puccetti and Pere-Lluis Huguet Cabot and Andrei Stefan Bejgu and Edoardo Barba and Alessio Miaschi and Felice Dell'Orletta and Andrea Esuli and Roberto Navigli},
year={2025},
eprint={2504.17025},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2504.17025},
}
``` |
SemanticAlignment/Mistral-v0.1-Italian-LAPT | SemanticAlignment | 2025-05-02T17:42:37Z | 8 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"it",
"en",
"arxiv:2504.17025",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:finetune:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-09-10T14:39:46Z | ---
language:
- it
- en
license: apache-2.0
pipeline_tag: text-generation
library_name: transformers
base_model:
- mistralai/Mistral-7B-v0.1
---
# Mistral-7B-v0.1-Italian-LAPT
<div align="center">
<img src="https://github.com/Andrew-Wyn/images/blob/master/sava/italian_adapt-img.jpg?raw=true" width="400" height="400" style="border-radius:10%" />
</div>
The **Mistral-7B-v0.1-Adapted** collection of large language models (LLMs) is a set of 7B generative models (text in/text out) adapted from **Mistral-7B-v0.1**.
*Mistral-v0.1-Italian-LAPT* is a continually trained Mistral model.
**Model developer:** SapienzaNLP, ISTI-CNR, ILC-CNR
**Model Architecture:** Mistral-7B-v0.1-Adapted is an auto-regressive language model that uses an optimized transformer architecture.
## Data used for the adaptation
The **Mistral-7B-v0.1-Adapted** models are trained on a collection of Italian and English data extracted from [CulturaX](https://huggingface.co/datasets/uonlp/CulturaX).
The data are extracted to be skewed toward the Italian language with a ratio of one over four, taking the first 9B tokens from the Italian part of CulturaX and the first 3B tokens from the English part.
## Use with Transformers
You can run conversational inference using the Transformers pipeline abstraction or by leveraging the Auto classes with the generate() function.
Make sure to update your transformers installation via `pip install --upgrade transformers`.
```python
import transformers
import torch
model_id = "SemanticAlignment/Mistral-v0.1-Italian-LAPT"
pipeline = transformers.pipeline(
"text-generation", model=model_id, model_kwargs={"torch_dtype": torch.bfloat16}, device_map="auto"
)
pipeline("Cosa si può fare in una bella giornata di sole?")
```
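As above, the Auto classes with `generate()` can be used instead of the pipeline (a sketch; sampling parameters are illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "SemanticAlignment/Mistral-v0.1-Italian-LAPT"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

inputs = tokenizer("Cosa si può fare in una bella giornata di sole?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```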
Code: https://github.com/SapienzaNLP/sava
## Citation
If you use any part of this work, please consider citing the paper as follows:
```bibtex
@misc{moroni2025optimizingllmsitalianreducing,
title={Optimizing LLMs for Italian: Reducing Token Fertility and Enhancing Efficiency Through Vocabulary Adaptation},
author={Luca Moroni and Giovanni Puccetti and Pere-Lluis Huguet Cabot and Andrei Stefan Bejgu and Edoardo Barba and Alessio Miaschi and Felice Dell'Orletta and Andrea Esuli and Roberto Navigli},
year={2025},
eprint={2504.17025},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2504.17025},
}
``` |
ArnavLatiyan/my-lora-leaf-model | ArnavLatiyan | 2025-05-02T17:42:18Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-02T17:37:59Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ClaudioItaly/Qwen2.5-7B-Gutenberg-FT | ClaudioItaly | 2025-05-02T17:41:43Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-02T17:31:32Z | ---
base_model: unsloth/qwen2.5-7b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** ClaudioItaly
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-7b-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
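The card ships no usage example; a minimal hedged sketch with the 🤗 Transformers pipeline (assuming the uploaded weights are the full merged model, as the `pytorch`/`qwen2` tags suggest):
```python
from transformers import pipeline

# Assumption: this repo holds full merged weights rather than an adapter
generator = pipeline("text-generation", model="ClaudioItaly/Qwen2.5-7B-Gutenberg-FT", device_map="auto")
print(generator("Once upon a time,", max_new_tokens=64)[0]["generated_text"])
```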
|
SemanticAlignment/Llama-3.1-8B-Italian-SAVA | SemanticAlignment | 2025-05-02T17:38:31Z | 3 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"it",
"en",
"arxiv:2504.17025",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:finetune:meta-llama/Llama-3.1-8B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-09-10T12:39:20Z | ---
language:
- it
- en
license: apache-2.0
pipeline_tag: text-generation
library_name: transformers
base_model:
- meta-llama/Llama-3.1-8B
---
# Llama-3.1-8B-Italian-SAVA
<div align="center">
<img src="https://github.com/Andrew-Wyn/images/blob/master/sava/italian_adapt-img.jpg?raw=true" width="400" height="400" style="border-radius:10%" />
</div>
The **Llama-3.1-8B-Adapted** collection of large language models (LLMs) is a set of 8B generative models (text in/text out) adapted from **Llama-3.1-8B**.
*Llama-3.1-8B-Italian-SAVA* is a continually trained Llama model, obtained after tokenizer substitution.
After adaptation, the tokenizer of this model is the same as that of [Minerva-3B](https://huggingface.co/sapienzanlp/Minerva-3B-base-v1.0).
**Model developer:** SapienzaNLP, ISTI-CNR, ILC-CNR
**Model Architecture:** Llama-3.1-8B-Adapted is an auto-regressive language model that uses an optimized transformer architecture.
## Data used for the adaptation
The **Llama-3.1-8B-Adapted** models are trained on a collection of Italian and English data extracted from [CulturaX](https://huggingface.co/datasets/uonlp/CulturaX).
The data are extracted to be skewed toward the Italian language with a ratio of one over four, taking the first 9B tokens from the Italian part of CulturaX and the first 3B tokens from the English part.
## Use with Transformers
You can run conversational inference using the Transformers pipeline abstraction or by leveraging the Auto classes with the generate() function.
Make sure to update your transformers installation via `pip install --upgrade transformers`.
```python
import transformers
import torch
model_id = "SemanticAlignment/Llama-3.1-8B-Italian-SAVA"
pipeline = transformers.pipeline(
"text-generation", model=model_id, model_kwargs={"torch_dtype": torch.bfloat16}, device_map="auto"
)
pipeline("Cosa si può fare in una bella giornata di sole?")
```
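The Auto classes with `generate()` work here as well (a sketch; sampling parameters are illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "SemanticAlignment/Llama-3.1-8B-Italian-SAVA"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

inputs = tokenizer("Cosa si può fare in una bella giornata di sole?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```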
Code: https://github.com/SapienzaNLP/sava
## Citation
If you use any part of this work, please consider citing the paper as follows:
```bibtex
@misc{moroni2025optimizingllmsitalianreducing,
title={Optimizing LLMs for Italian: Reducing Token Fertility and Enhancing Efficiency Through Vocabulary Adaptation},
author={Luca Moroni and Giovanni Puccetti and Pere-Lluis Huguet Cabot and Andrei Stefan Bejgu and Edoardo Barba and Alessio Miaschi and Felice Dell'Orletta and Andrea Esuli and Roberto Navigli},
year={2025},
eprint={2504.17025},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2504.17025},
}
``` |
kavinda123321/speecht5_finetuned_english_ranil_2 | kavinda123321 | 2025-05-02T17:36:50Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"base_model:kavinda123321/speecht5_finetuned_test2_p236_id_kavinda",
"base_model:finetune:kavinda123321/speecht5_finetuned_test2_p236_id_kavinda",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-to-audio | 2025-05-02T17:36:11Z | ---
library_name: transformers
license: mit
base_model: kavinda123321/speecht5_finetuned_test2_p236_id_kavinda
tags:
- generated_from_trainer
model-index:
- name: speecht5_finetuned_english_ranil_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_english_ranil_2
This model is a fine-tuned version of [kavinda123321/speecht5_finetuned_test2_p236_id_kavinda](https://huggingface.co/kavinda123321/speecht5_finetuned_test2_p236_id_kavinda) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5585
## Model description
More information needed
## Intended uses & limitations
More information needed
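No usage example is given; as a rough illustration, the checkpoint should load with the standard SpeechT5 text-to-speech classes from 🤗 Transformers. A minimal sketch follows — the processor repo and the zero speaker embedding are assumptions; in practice, use an x-vector matching the target voice:

```python
import torch
import soundfile as sf
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

# Assumption: base processor and vocoder from the upstream SpeechT5 repos
processor = SpeechT5Processor.from_pretrained("microsoft/speecht5_tts")
model = SpeechT5ForTextToSpeech.from_pretrained("kavinda123321/speecht5_finetuned_english_ranil_2")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Hello, this is a test.", return_tensors="pt")
# Placeholder 512-dim speaker embedding (replace with a real x-vector)
speaker_embeddings = torch.zeros((1, 512))
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```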
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 20
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-------:|:----:|:---------------:|
| 0.5558 | 1.0 | 14 | 0.5676 |
| 0.4928 | 2.0 | 28 | 0.5735 |
| 0.4671 | 3.0 | 42 | 0.5512 |
| 0.4573 | 4.0 | 56 | 0.5707 |
| 0.4614 | 5.0 | 70 | 0.5457 |
| 0.4366 | 6.0 | 84 | 0.5645 |
| 0.4178 | 7.0 | 98 | 0.5562 |
| 0.4022 | 8.0 | 112 | 0.5716 |
| 0.3996 | 9.0 | 126 | 0.5460 |
| 0.3883 | 10.0 | 140 | 0.5708 |
| 0.3801 | 11.0 | 154 | 0.5735 |
| 0.3716 | 12.0 | 168 | 0.5324 |
| 0.3634 | 13.0 | 182 | 0.5505 |
| 0.3586 | 14.0 | 196 | 0.5477 |
| 0.3589 | 15.0 | 210 | 0.5531 |
| 0.3443 | 16.0 | 224 | 0.5551 |
| 0.3362 | 17.0 | 238 | 0.5537 |
| 0.3404 | 18.0 | 252 | 0.5579 |
| 0.3452 | 18.6038 | 260 | 0.5585 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1
|
Andrewwango/ssibench | Andrewwango | 2025-05-02T17:35:24Z | 0 | 0 | null | [
"license:bsd-3-clause",
"region:us"
] | null | 2025-05-02T17:30:15Z | ---
license: bsd-3-clause
---
|
RLHF-And-Friends/TLDR-Llama-3.2-3B-SmallSFT | RLHF-And-Friends | 2025-05-02T17:31:35Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"dataset:tldr-sft",
"base_model:meta-llama/Llama-3.2-3B",
"base_model:finetune:meta-llama/Llama-3.2-3B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-02T13:47:05Z | ---
base_model: meta-llama/Llama-3.2-3B
datasets: tldr-sft
library_name: transformers
model_name: SFT-TLDR-Llama-3.2-3B-SMALL
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for SFT-TLDR-Llama-3.2-3B-SMALL
This model is a fine-tuned version of [meta-llama/Llama-3.2-3B](https://huggingface.co/meta-llama/Llama-3.2-3B) on the [tldr-sft](https://huggingface.co/datasets/tldr-sft) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="RLHF-And-Friends/SFT-TLDR-Llama-3.2-3B-SMALL", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/RADFAN/SFT-TLDR/runs/e58csjw9)
This model was trained with SFT.
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.7.0
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
mradermacher/IOM-Qwen2.5-1.5B-GGUF | mradermacher | 2025-05-02T17:24:11Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:XUxs/IOM-Qwen2.5-1.5B",
"base_model:quantized:XUxs/IOM-Qwen2.5-1.5B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-02T17:12:36Z | ---
base_model: XUxs/IOM-Qwen2.5-1.5B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/XUxs/IOM-Qwen2.5-1.5B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
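For a quick local test from Python, the single-file quants can also be loaded with `llama-cpp-python` (a sketch, assuming that package is installed; the quant filename is one of the entries from the table below):

```python
from llama_cpp import Llama

# Download a quant directly from this repo and run a short completion
llm = Llama.from_pretrained(
    repo_id="mradermacher/IOM-Qwen2.5-1.5B-GGUF",
    filename="IOM-Qwen2.5-1.5B.Q4_K_M.gguf",
    n_ctx=2048,
)
print(llm("Question: What is quantization? Answer:", max_tokens=64)["choices"][0]["text"])
```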
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/IOM-Qwen2.5-1.5B-GGUF/resolve/main/IOM-Qwen2.5-1.5B.Q2_K.gguf) | Q2_K | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/IOM-Qwen2.5-1.5B-GGUF/resolve/main/IOM-Qwen2.5-1.5B.Q3_K_S.gguf) | Q3_K_S | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/IOM-Qwen2.5-1.5B-GGUF/resolve/main/IOM-Qwen2.5-1.5B.Q3_K_M.gguf) | Q3_K_M | 0.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/IOM-Qwen2.5-1.5B-GGUF/resolve/main/IOM-Qwen2.5-1.5B.Q3_K_L.gguf) | Q3_K_L | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/IOM-Qwen2.5-1.5B-GGUF/resolve/main/IOM-Qwen2.5-1.5B.IQ4_XS.gguf) | IQ4_XS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/IOM-Qwen2.5-1.5B-GGUF/resolve/main/IOM-Qwen2.5-1.5B.Q4_K_S.gguf) | Q4_K_S | 1.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/IOM-Qwen2.5-1.5B-GGUF/resolve/main/IOM-Qwen2.5-1.5B.Q4_K_M.gguf) | Q4_K_M | 1.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/IOM-Qwen2.5-1.5B-GGUF/resolve/main/IOM-Qwen2.5-1.5B.Q5_K_S.gguf) | Q5_K_S | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/IOM-Qwen2.5-1.5B-GGUF/resolve/main/IOM-Qwen2.5-1.5B.Q5_K_M.gguf) | Q5_K_M | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/IOM-Qwen2.5-1.5B-GGUF/resolve/main/IOM-Qwen2.5-1.5B.Q6_K.gguf) | Q6_K | 1.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/IOM-Qwen2.5-1.5B-GGUF/resolve/main/IOM-Qwen2.5-1.5B.Q8_0.gguf) | Q8_0 | 1.7 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/IOM-Qwen2.5-1.5B-GGUF/resolve/main/IOM-Qwen2.5-1.5B.f16.gguf) | f16 | 3.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
chchen/Llama3-OpenBioLLM-8B-PsyCourse-info-fold6 | chchen | 2025-05-02T17:23:57Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama-factory",
"lora",
"generated_from_trainer",
"base_model:aaditya/Llama3-OpenBioLLM-8B",
"base_model:adapter:aaditya/Llama3-OpenBioLLM-8B",
"license:llama3",
"region:us"
] | null | 2025-05-02T16:25:28Z | ---
library_name: peft
license: llama3
base_model: aaditya/Llama3-OpenBioLLM-8B
tags:
- llama-factory
- lora
- generated_from_trainer
model-index:
- name: Llama3-OpenBioLLM-8B-PsyCourse-info-fold6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama3-OpenBioLLM-8B-PsyCourse-info-fold6
This model is a fine-tuned version of [aaditya/Llama3-OpenBioLLM-8B](https://huggingface.co/aaditya/Llama3-OpenBioLLM-8B) on the course-info-train-fold6 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1482
## Model description
More information needed
## Intended uses & limitations
More information needed
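This repository contains only LoRA adapter weights, so a minimal loading sketch looks like the following (assuming access to the base model; not an official usage snippet):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("aaditya/Llama3-OpenBioLLM-8B", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("aaditya/Llama3-OpenBioLLM-8B")
# Attach the fine-tuned LoRA adapter on top of the base model
model = PeftModel.from_pretrained(base, "chchen/Llama3-OpenBioLLM-8B-PsyCourse-info-fold6")
```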
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.5581 | 0.3951 | 10 | 0.4241 |
| 0.2468 | 0.7901 | 20 | 0.2248 |
| 0.1806 | 1.1852 | 30 | 0.1848 |
| 0.2529 | 1.5802 | 40 | 0.1706 |
| 0.1665 | 1.9753 | 50 | 0.1528 |
| 0.1308 | 2.3704 | 60 | 0.1546 |
| 0.1003 | 2.7654 | 70 | 0.1499 |
| 0.0591 | 3.1605 | 80 | 0.1482 |
| 0.1073 | 3.5556 | 90 | 0.1548 |
| 0.1003 | 3.9506 | 100 | 0.1490 |
| 0.0791 | 4.3457 | 110 | 0.1522 |
| 0.0557 | 4.7407 | 120 | 0.1529 |
### Framework versions
- PEFT 0.12.0
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3 |
BootesVoid/cma6lk7fb01m0negal98rg6tu_cma71dris01uanegaq1itfimm | BootesVoid | 2025-05-02T17:23:18Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-02T17:23:15Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: VICTORIA
---
# Cma6Lk7Fb01M0Negal98Rg6Tu_Cma71Dris01Uanegaq1Itfimm
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `VICTORIA` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "VICTORIA",
"lora_weights": "https://huggingface.co/BootesVoid/cma6lk7fb01m0negal98rg6tu_cma71dris01uanegaq1itfimm/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cma6lk7fb01m0negal98rg6tu_cma71dris01uanegaq1itfimm', weight_name='lora.safetensors')
image = pipeline('VICTORIA').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cma6lk7fb01m0negal98rg6tu_cma71dris01uanegaq1itfimm/discussions) to add images that show off what you’ve made with this LoRA.
|
aleegis/e22ff4b7-26fa-4a5d-a413-4cf35fa31faa | aleegis | 2025-05-02T17:18:15Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0",
"base_model:adapter:WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0",
"license:llama3",
"region:us"
] | null | 2025-05-02T14:40:25Z | ---
library_name: peft
license: llama3
base_model: WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0
tags:
- axolotl
- generated_from_trainer
model-index:
- name: e22ff4b7-26fa-4a5d-a413-4cf35fa31faa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0
bf16: auto
chat_template: llama3
dataloader_num_workers: 12
dataset_prepared_path: null
datasets:
- data_files:
- ebdef80c11c8be43_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/ebdef80c11c8be43_train_data.json
type:
field_instruction: prompt
field_output: generation
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 8
gradient_checkpointing: false
group_by_length: false
hub_model_id: aleegis/e22ff4b7-26fa-4a5d-a413-4cf35fa31faa
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: null
lora_alpha: 32
lora_dropout: 0.15
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
loraplus_lr_embedding: 1.0e-06
loraplus_lr_ratio: 16
lr_scheduler: cosine
max_grad_norm: 1
max_steps: 1500
micro_batch_size: 2
mlflow_experiment_name: /tmp/ebdef80c11c8be43_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 200
optimizer: adamw_torch_fused
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
save_total_limit: 10
saves_per_epoch: 0
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.0
wandb_entity: null
wandb_mode: online
wandb_name: 8b4c8c80-b92d-409e-b63a-5d20d6027586
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 8b4c8c80-b92d-409e-b63a-5d20d6027586
warmup_steps: 100
weight_decay: 0
xformers_attention: null
```
</details><br>
# e22ff4b7-26fa-4a5d-a413-4cf35fa31faa
This model is a fine-tuned version of [WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0](https://huggingface.co/WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
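As with other LoRA-only repositories, the adapter can be loaded over its base model; a hedged sketch using PEFT's auto class (not an official snippet):

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Resolves the base model from the adapter config and applies the LoRA weights
model = AutoPeftModelForCausalLM.from_pretrained(
    "aleegis/e22ff4b7-26fa-4a5d-a413-4cf35fa31faa", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0")
```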
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 1500
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
martin-rizzo/TinyBreaker.prototype0 | martin-rizzo | 2025-05-02T17:17:37Z | 0 | 3 | null | [
"image-generation",
"text-to-image",
"art",
"pixart-sigma",
"image",
"en",
"arxiv:2403.04692",
"base_model:PixArt-alpha/PixArt-Sigma-XL-2-1024-MS",
"base_model:finetune:PixArt-alpha/PixArt-Sigma-XL-2-1024-MS",
"license:mit",
"region:us"
] | text-to-image | 2025-02-09T01:15:51Z | ---
license: mit
language:
- en
base_model:
- PixArt-alpha/PixArt-Sigma-XL-2-1024-MS
- stable-diffusion-v1-5/stable-diffusion-v1-5
tags:
- image-generation
- text-to-image
- art
- pixart-sigma
- image
---
# TinyBreaker (prototype0)
<div style="display:flex;justify-content: left">
<a href="https://github.com/martin-rizzo/ComfyUI-TinyBreaker"><img src="https://img.shields.io/badge/GitHub-TinyBreaker-EEE?logo=github&logoColor=white&labelColor=444444" alt="GitHub: TinyBreaker"></a>  
<a href="https://civitai.com/models/1213728"><img src="https://img.shields.io/badge/CivitAI%3A-TinyBreaker-EEE?logo=c%2B%2B&logoColor=white&labelColor=1971C2" alt="CivitAI: TinyBreaker"></a>  
</div>

<div style="color: white; background-color: #882200; padding: 12px; border-radius: 6px; margin: 10px 0;">
⚠️ <b>Important:</b> This version has been replaced by "prototype1", which includes VAEs packaged in a different way, enabling extra functionality such as the Tiny Upscaler.<br/>
Please download the updated version from this link: <b><a style="color: #80C0FF; font-weight: bold;" href="https://huggingface.co/martin-rizzo/TinyBreaker.prototype1">TinyBreaker (prototype1)</a></b>
</div>
## Overview
**TinyBreaker** is a hybrid two-step model (base + refiner) designed for efficient image generation on mid-end and low-end hardware. By combining the strengths of PixArt and Photon models, it delivers high-quality images with strong prompt adherence.
## Key Features
- **Hybrid Two-Step Architecture**: Combines PixArt-Sigma as the base model with a refiner based on Photon (or any SD1.x model), both chosen for their low GPU consumption.
- **Efficient Parameter Usage**: The base model’s 0.6 billion parameters enable high-quality image generation with minimal computational overhead.
- **Fast Performance**: Produces high-quality 1536×1024 images in ~15 seconds on an NVIDIA RTX 3080 GPU, with ongoing work to cut generation times to under 10 seconds.
- **High Prompt Adherence**: Generates images that closely match user prompts and expectations, thanks to the robust performance of the PixArt-Sigma model and the T5 text encoder.
- **Optimized Latent Space Processing**: Leverages Tiny Autoencoders for efficient latent space conversion.
## Usage Requirements
Currently, TinyBreaker can only be used with ComfyUI. To utilize it, you'll need to install the custom nodes specific to this model through the [ComfyUI-TinyBreaker GitHub repository](https://github.com/martin-rizzo/ComfyUI-TinyBreaker).
## Limitations
- **Text Generation**: Generating legible text within images is a challenge due to PixArt's training limitations. Enhancements in this area may require extensive retraining.
- **Human Anatomy in Complex Poses**: While the model performs reliably with standard poses (e.g., standing, facing the camera), it struggles with anatomical accuracy in poses that require more complex or dynamic actions.
- **Complex Human Interactions**: The model has difficulty generating detailed scenes involving intricate interactions among people, as well as interactions between people and objects, such as collaborative tasks or dynamic object manipulation.
Note: The current "Prototype1" version of TinyBreaker utilizes PixArt-Sigma 1024 and Photon models **without any additional training or fine-tuning**. In the future, if I have the resources, I plan to train both models together to generate images of even greater quality.
## Future Directions
I am dedicated to improving TinyBreaker's performance and accessibility, especially for users with mid-range or lower-end hardware. Look out for future updates as I continue to expand TinyBreaker's capabilities.
## Acknowledgments
* I extend my sincere thanks to the PixArt-Σ developers for their exceptional model, which has been vital to this project's development.
[PixArt-Σ GitHub Repository](https://github.com/PixArt-alpha/PixArt-sigma) | [PixArt-Σ Hugging Face Model](https://huggingface.co/PixArt-alpha/PixArt-Sigma-XL-2-1024-MS) | [PixArt-Σ arXiv Report](https://arxiv.org/abs/2403.04692)
* Additional thanks to Ollin Boer Bohan for the Tiny AutoEncoder models, which offer efficient latent image processing and served as the foundation for the encoding, decoding, and transcoding operations in TinyBreaker.
[Tiny AutoEncoder GitHub Repository](https://github.com/madebyollin/taesd)
## Resources
- [TinyBreaker on CivitAI](https://civitai.com/models/1213728/tinybreaker): A hub for exploring generated images, prompts, and workflows created by me and the community, showcasing the model's output quality.
- [ComfyUI-TinyBreaker](https://github.com/martin-rizzo/ComfyUI-TinyBreaker): Nodes and workflows for ComfyUI to experiment with the model's capabilities.
- [TinyBreakerTools](https://github.com/martin-rizzo/TinyBreakerTools): Tools I'm building for the model, mainly to create the safetensors file for TinyBreaker.
- [AbominableWorkflows](https://github.com/martin-rizzo/AbominableWorkflows): A predecessor of TinyBreaker. My first experiment combining PixArt-Sigma and Photon without Python code, using only standard nodes from ComfyUI.
|
rubix9/llama-3-1b-chat-robincnp | rubix9 | 2025-05-02T17:15:51Z | 46 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-01T20:05:55Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mlx-community/MiMo-7B-SFT-4bit | mlx-community | 2025-05-02T17:13:51Z | 0 | 0 | mlx | [
"mlx",
"safetensors",
"mimo",
"text-generation",
"conversational",
"custom_code",
"base_model:XiaomiMiMo/MiMo-7B-SFT",
"base_model:quantized:XiaomiMiMo/MiMo-7B-SFT",
"license:mit",
"4-bit",
"region:us"
] | text-generation | 2025-05-02T17:02:43Z | ---
license: mit
base_model: XiaomiMiMo/MiMo-7B-SFT
library_name: mlx
pipeline_tag: text-generation
tags:
- mlx
---
# mlx-community/MiMo-7B-SFT-4bit
This model [mlx-community/MiMo-7B-SFT-4bit](https://huggingface.co/mlx-community/MiMo-7B-SFT-4bit) was
converted to MLX format from [XiaomiMiMo/MiMo-7B-SFT](https://huggingface.co/XiaomiMiMo/MiMo-7B-SFT)
using mlx-lm version **0.24.0**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/MiMo-7B-SFT-4bit")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
DreadPoor/mergekit-linear-vqtsxly-Q4_K_M-GGUF | DreadPoor | 2025-05-02T17:13:21Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:DreadPoor/mergekit-linear-vqtsxly",
"base_model:quantized:DreadPoor/mergekit-linear-vqtsxly",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-02T17:12:55Z | ---
base_model: DreadPoor/mergekit-linear-vqtsxly
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# DreadPoor/mergekit-linear-vqtsxly-Q4_K_M-GGUF
This model was converted to GGUF format from [`DreadPoor/mergekit-linear-vqtsxly`](https://huggingface.co/DreadPoor/mergekit-linear-vqtsxly) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/DreadPoor/mergekit-linear-vqtsxly) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo DreadPoor/mergekit-linear-vqtsxly-Q4_K_M-GGUF --hf-file mergekit-linear-vqtsxly-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo DreadPoor/mergekit-linear-vqtsxly-Q4_K_M-GGUF --hf-file mergekit-linear-vqtsxly-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo DreadPoor/mergekit-linear-vqtsxly-Q4_K_M-GGUF --hf-file mergekit-linear-vqtsxly-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo DreadPoor/mergekit-linear-vqtsxly-Q4_K_M-GGUF --hf-file mergekit-linear-vqtsxly-q4_k_m.gguf -c 2048
```
|
deswaq/iuh2 | deswaq | 2025-05-02T17:13:00Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-02T17:09:22Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
diegobit/llama-3-8b-Instruct-bnb-4bit-ita-orpo | diegobit | 2025-05-02T17:11:34Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"unsloth",
"conversational",
"dataset:mii-community/ultrafeedback-preferences-translated-ita",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-06-03T07:49:21Z | ---
library_name: transformers
tags:
- unsloth
license: llama3
datasets:
- mii-community/ultrafeedback-preferences-translated-ita
---
# Model Card for Model ID
This is an ORPO finetuning of llama-3-8b for the Italian language on the Italian ultrafeedback dataset: [mii-community/ultrafeedback-preferences-translated-ita](https://huggingface.co/datasets/mii-community/ultrafeedback-preferences-translated-ita)
## Model Details
### Model Description
- **Developed by:** Diego Giorgini
- **Funded by:** AI Technologies SRL - www.aitechnologies.it
- **Language(s) (NLP):** Italian
- **License:** llama3
- **Finetuned from model:** unsloth/llama-3-8b-Instruct-bnb-4bit
## Training Details
### Environment
unsloth: 2024.5
torch: 2.2
### Training Data
`mii-community/ultrafeedback-preferences-translated-ita` is a selection of 55k rows of the ultrafeedback dataset, translated into Italian with argotranslate.
### Training Procedure
#### Preprocessing [optional]
- No preprocessing has been performed, except for formatting with the llama3 chat_template from unsloth:
```tokenizer = get_chat_template(tokenizer, chat_template = "llama-3")```
#### Training Hyperparameters
- **Training regime:** 4bit
- **PEFT parameters:**
- **Model loading parameters:**
```
max_seq_length = 8192
dtype = None
load_in_4bit = True
```
```
r = 64
lora_alpha = 64
lora_dropout = 0
bias = "none"
random_state = 3407
use_rslora = False
loftq_config = None
```
- **ORPOConfig parameters:**
```
max_length = 8192
max_prompt_length = max_seq_length//2
max_completion_length = max_seq_length//2
warmup_ratio = 0.1
weight_decay = 0.01
per_device_train_batch_size = 1
gradient_accumulation_steps = 16
learning_rate=8e-6
beta = 0.1
optim = "paged_adamw_8bit"
lr_scheduler_type = "linear"
num_train_epochs = 1
```
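Putting these hyperparameters together, a condensed sketch of an Unsloth + TRL ORPO run (the dataset column mapping, target modules, and output paths are illustrative assumptions, not the exact training script):

```python
from unsloth import FastLanguageModel
from trl import ORPOConfig, ORPOTrainer
from datasets import load_dataset

max_seq_length = 8192
model, tokenizer = FastLanguageModel.from_pretrained(
    "unsloth/llama-3-8b-Instruct-bnb-4bit",
    max_seq_length=max_seq_length, dtype=None, load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model, r=64, lora_alpha=64, lora_dropout=0, bias="none", random_state=3407,
    # Standard Llama projection modules (assumption; not stated in the card)
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"],
)
# The trainer expects prompt/chosen/rejected columns; column mapping omitted here
dataset = load_dataset("mii-community/ultrafeedback-preferences-translated-ita", split="train")
args = ORPOConfig(
    output_dir="outputs",
    max_length=max_seq_length,
    max_prompt_length=max_seq_length // 2,
    max_completion_length=max_seq_length // 2,
    per_device_train_batch_size=1, gradient_accumulation_steps=16,
    learning_rate=8e-6, beta=0.1, optim="paged_adamw_8bit",
    lr_scheduler_type="linear", warmup_ratio=0.1, weight_decay=0.01,
    num_train_epochs=1,
)
trainer = ORPOTrainer(model=model, args=args, train_dataset=dataset, tokenizer=tokenizer)
trainer.train()
```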
#### Speeds, Sizes, Times
16h on an A100-40GB
## Model Card Contact
[email protected] |
xiaoheiqaq/Conversational-base | xiaoheiqaq | 2025-05-02T17:11:16Z | 0 | 0 | null | [
"safetensors",
"mistral",
"license:apache-2.0",
"region:us"
] | null | 2025-05-02T16:10:34Z | ---
license: apache-2.0
---
|
diegobit/Phi-3-mini-4k-instruct-ita-orpo-v2 | diegobit | 2025-05-02T17:11:07Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"unsloth",
"conversational",
"dataset:efederici/alpaca-vs-alpaca-orpo-dpo",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-06-07T08:00:23Z | ---
library_name: transformers
tags:
- unsloth
license: mit
datasets:
- efederici/alpaca-vs-alpaca-orpo-dpo
---
# Model Card for Model ID
This is an ORPO finetuning of phi-3-mini-4k-instruct for the Italian language on the Alpaca vs. Alpaca Italian dataset: [efederici/alpaca-vs-alpaca-orpo-dpo](https://huggingface.co/datasets/efederici/alpaca-vs-alpaca-orpo-dpo)
## Model Details
### Model Description
- **Developed by:** Diego Giorgini
- **Funded by:** AI Technologies SRL - www.aitechnologies.it
- **Language(s) (NLP):** Italian
- **License:** mit
- **Finetuned from model:** unsloth/Phi-3-mini-4k-instruct
## Training Details
### Environment
unsloth: 2024.5
torch: 2.2
### Training Data
[efederici/alpaca-vs-alpaca-orpo-dpo](https://huggingface.co/datasets/efederici/alpaca-vs-alpaca-orpo-dpo): The Alpaca vs. Alpaca dataset is a curated blend of the Alpaca dataset and the Alpaca GPT-4 dataset, both available on HuggingFace Datasets. It uses the standard GPT dataset as the 'rejected' answer, steering the model towards the GPT-4 answer, which is considered as the 'chosen' one.
### Training Procedure
#### Preprocessing [optional]
- No preprocessing has been performed, except for formatting with the phi-3 chat_template from unsloth:
```tokenizer = get_chat_template(tokenizer, chat_template = "phi-3")```
#### Training Hyperparameters
- **Training regime:** bf16
- **Model loading parameters:**
```
max_seq_length = 8192
dtype = None
load_in_4bit = False
```
- **PEFT parameters:**
```
r = 64
lora_alpha = 64
lora_dropout = 0
bias = "none"
random_state = 3407
use_rslora = False
loftq_config = None
```
- **ORPOConfig parameters:**
```
max_length = 8192
max_prompt_length = max_seq_length//2
max_completion_length = max_seq_length//2
warmup_ratio = 0.1
weight_decay = 0.01
per_device_train_batch_size = 1
gradient_accumulation_steps = 16
learning_rate=8e-6
beta = 0.1
optim = "paged_adamw_8bit"
lr_scheduler_type = "linear"
num_train_epochs = 1
```
#### Speeds, Sizes, Times
7h on an A100-40GB
## Model Card Contact
[email protected] |
diegobit/llama-3-8b-ita-4k-orpo-v3 | diegobit | 2025-05-02T17:10:46Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"unsloth",
"conversational",
"dataset:mii-community/ultrafeedback-preferences-translated-ita",
"dataset:efederici/alpaca-vs-alpaca-orpo-dpo",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-06-10T08:05:07Z | ---
library_name: transformers
tags:
- unsloth
license: llama3
datasets:
- mii-community/ultrafeedback-preferences-translated-ita
- efederici/alpaca-vs-alpaca-orpo-dpo
---
# Model Card for Model ID
This is an ORPO finetuning of llama-3-8b for the Italian language on a concatenation of two datasets:
- [mii-community/ultrafeedback-preferences-translated-ita](https://huggingface.co/datasets/mii-community/ultrafeedback-preferences-translated-ita)
- [efederici/alpaca-vs-alpaca-orpo-dpo](https://huggingface.co/datasets/efederici/alpaca-vs-alpaca-orpo-dpo)
The other differences from `diegobit/llama-3-8b-Instruct-bnb-4bit-ita-orpo` are:
- the starting model: the non-instruct `astronomer/Llama-3-8B-Special-Tokens-Adjusted` instead of `unsloth/llama-3-8b-Instruct-bnb-4bit`
- no loading in 4 bits
- given the increased GPU memory requirements, the maximum sequence length used for finetuning is 4096
## Model Details
### Model Description
- **Developed by:** Diego Giorgini
- **Funded by:** AI Technologies SRL - www.aitechnologies.it
- **Language(s) (NLP):** Italian
- **License:** llama3
- **Finetuned from model:** astronomer/Llama-3-8B-Special-Tokens-Adjusted
## Training Details
### Environment
unsloth: 2024.5
torch: 2.2
### Training Data
- `mii-community/ultrafeedback-preferences-translated-ita` is a selection of 55k rows of the ultrafeedback dataset, translated into Italian with argotranslate.
- `efederici/alpaca-vs-alpaca-orpo-dpo`: The Alpaca vs. Alpaca dataset is a curated blend of the Alpaca dataset and the Alpaca GPT-4 dataset, both available on HuggingFace Datasets. It uses the standard GPT dataset as the 'rejected' answer, steering the model towards the GPT-4 answer, which is considered as the 'chosen' one.
### Training Procedure
#### Preprocessing [optional]
- No preprocessing has been performed, except for formatting with the llama-3 chat_template from unsloth:
```
tokenizer = get_chat_template(tokenizer, chat_template = "llama-3")
```
#### Training Hyperparameters
- **Training regime:** bf16
- **Model loading parameters:**
```
max_seq_length = 4096
dtype = None
load_in_4bit = False
```
- **PEFT parameters:**
```
r = 64
lora_alpha = 64
lora_dropout = 0
bias = "none"
random_state = 3407
use_rslora = False
loftq_config = None
```
- **ORPOConfig parameters:**
```
max_length = 4096
max_prompt_length = max_seq_length//2
max_completion_length = max_seq_length//2
warmup_ratio = 0.1
weight_decay = 0.01
per_device_train_batch_size = 1
gradient_accumulation_steps = 16
learning_rate=8e-6
beta = 0.1
optim = "paged_adamw_8bit"
lr_scheduler_type = "linear"
num_train_epochs = 1
```
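For reference, a hedged sketch of how the PEFT values listed above are typically passed through unsloth's LoRA wrapper — the `target_modules` list is an assumption, since the card does not state it:
```python
from unsloth import FastLanguageModel

model = FastLanguageModel.get_peft_model(
    model,
    r = 64,
    lora_alpha = 64,
    lora_dropout = 0,
    bias = "none",
    random_state = 3407,
    use_rslora = False,
    loftq_config = None,
    # Assumption: the usual llama attention/MLP projections
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj"],
)
```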
#### Speeds, Sizes, Times
19h on an A100-40GB
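The card does not include a usage snippet; a minimal hedged sketch for trying the checkpoint on Italian chat (generation settings and the prompt are assumptions; the repository tags suggest the uploaded weights are 4-bit bitsandbytes):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "diegobit/llama-3-8b-ita-4k-orpo-v3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map = "auto")

messages = [{"role": "user", "content": "Spiegami in due frasi cos'è l'ORPO."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt = True, return_tensors = "pt"
).to(model.device)

out = model.generate(inputs, max_new_tokens = 128)
print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens = True))
```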
## Model Card Contact
[email protected] |
phospho-app/Starkosaure-Stuffed_Animal_3cam_V0.0-x40pmnwemi | phospho-app | 2025-05-02T17:10:29Z | 0 | 0 | null | [
"phosphobot",
"gr00t",
"region:us"
] | null | 2025-05-02T17:08:31Z |
---
tags:
- phosphobot
- gr00t
task_categories:
- robotics
---
# gr00t Model - phospho Training Pipeline
## Error Traceback
We faced an issue while training your model.
```
Traceback (most recent call last):
File "/root/src/helper.py", line 224, in predict
raise RuntimeError(error_msg)
RuntimeError: Training process failed with exit code 1:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/transformers/models/llama/modeling_llama.py", line 277, in apply_rotary_pos_emb
q_embed = (q * cos) + (rotate_half(q) * sin)
^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/transformers/models/llama/modeling_llama.py", line 252, in rotate_half
return torch.cat((-x2, x1), dim=-1)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 64.00 MiB. GPU 0 has a total capacity of 79.25 GiB of which 38.75 MiB is free. Process 32 has 79.21 GiB memory in use. Of the allocated memory 78.38 GiB is allocated by PyTorch, and 336.91 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
0%| | 0/450 [00:09<?, ?it/s]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/root/src/helper.py", line 226, in predict
raise RuntimeError(e)
RuntimeError: Training process failed with exit code 1:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/transformers/models/llama/modeling_llama.py", line 277, in apply_rotary_pos_emb
q_embed = (q * cos) + (rotate_half(q) * sin)
^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/transformers/models/llama/modeling_llama.py", line 252, in rotate_half
return torch.cat((-x2, x1), dim=-1)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 64.00 MiB. GPU 0 has a total capacity of 79.25 GiB of which 38.75 MiB is free. Process 32 has 79.21 GiB memory in use. Of the allocated memory 78.38 GiB is allocated by PyTorch, and 336.91 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
0%| | 0/450 [00:09<?, ?it/s]
```
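The traceback itself suggests the usual mitigations; a hedged sketch of what a retry might change (the halved batch size is an assumption):
```python
import os

# Must be set before torch initializes CUDA, per the error message above
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "expandable_segments:True"

# Assumption: halving the per-device batch size and compensating with
# gradient accumulation keeps the effective batch size at 64
batch_size = 32
gradient_accumulation_steps = 2
```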
## Training parameters:
- **Dataset**: [Starkosaure/Stuffed_Animal_3cam_V0.0](https://huggingface.co/datasets/Starkosaure/Stuffed_Animal_3cam_V0.0)
- **Wandb run URL**: None
- **Epochs**: 10
- **Batch size**: 64
- **Training steps**: 443
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=replicate_groot_training_pipeline)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=replicate_groot_training_pipeline)
|
deswaq/iuh1 | deswaq | 2025-05-02T17:03:51Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-02T17:00:20Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
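The card leaves this section blank; below is a minimal hedged sketch for a llama-architecture text-generation checkpoint such as this one (untested against this specific repository):
```python
from transformers import pipeline

# Assumption: the checkpoint is compatible with the standard text-generation pipeline
generator = pipeline("text-generation", model="deswaq/iuh1")
print(generator("Hello, how are you?", max_new_tokens=32)[0]["generated_text"])
```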
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
edantonio505/department_classifier | edantonio505 | 2025-05-02T16:57:09Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-05-02T16:56:44Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
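The card leaves this section blank; below is a minimal hedged sketch for a DistilBERT text-classification checkpoint such as this one (the input sentence is hypothetical and the label set is unknown):
```python
from transformers import pipeline

# Assumption: a standard sequence-classification head with the repo's id2label mapping
classifier = pipeline("text-classification", model="edantonio505/department_classifier")
print(classifier("I need help resetting my email password."))
```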
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
KingEmpire/sn21_omega_0205_1 | KingEmpire | 2025-05-02T16:53:49Z | 0 | 0 | null | [
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | 2025-05-02T16:28:49Z | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
ajagota71/gpt-neo-125m-detox-epoch-80 | ajagota71 | 2025-05-02T16:51:47Z | 0 | 0 | null | [
"safetensors",
"gpt_neo",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-05-02T16:51:29Z | ---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed] |
ajagota71/gpt-neo-125m-detox-epoch-60 | ajagota71 | 2025-05-02T16:50:54Z | 0 | 0 | null | [
"safetensors",
"gpt_neo",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-05-02T16:50:34Z | ---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed] |