Column schema: modelId string (5–139 chars) · author string (2–42 chars) · last_modified timestamp[us, tz=UTC] (2020-02-15 11:33:14 – 2025-07-13 00:46:37) · downloads int64 (0 – 223M) · likes int64 (0 – 11.7k) · library_name string (518 classes) · tags list (1 – 4.05k entries) · pipeline_tag string (55 classes) · createdAt timestamp[us, tz=UTC] (2022-03-02 23:29:04 – 2025-07-13 00:45:34) · card string (11 chars – 1.01M chars)

| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
---|---|---|---|---|---|---|---|---|---|
ButterChicken98/pv_h_v3 | ButterChicken98 | 2025-04-07T03:43:46Z | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"text-to-image",
"dreambooth",
"diffusers-training",
"stable-diffusion",
"stable-diffusion-diffusers",
"base_model:stable-diffusion-v1-5/stable-diffusion-v1-5",
"base_model:finetune:stable-diffusion-v1-5/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2025-04-07T02:08:48Z | ---
base_model: stable-diffusion-v1-5/stable-diffusion-v1-5
library_name: diffusers
license: creativeml-openrail-m
inference: true
instance_prompt: A photo of a healthy green leaf with smooth texture and no visible
damage. hd, 4k
tags:
- text-to-image
- dreambooth
- diffusers-training
- stable-diffusion
- stable-diffusion-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# DreamBooth - ButterChicken98/pv_h_v3
This is a DreamBooth model derived from stable-diffusion-v1-5/stable-diffusion-v1-5. The weights were trained on the prompt "A photo of a healthy green leaf with smooth texture and no visible damage. hd, 4k" using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.
DreamBooth for the text encoder was enabled: True.
## Intended uses & limitations
#### How to use
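Below is a minimal sketch, assuming the repository loads as a standard `StableDiffusionPipeline` (consistent with the card's tags); the device and dtype are illustrative choices, not part of the original card.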
```python
# Sketch (assumed, not from the original card): load as a standard Stable Diffusion pipeline
from diffusers import StableDiffusionPipeline
import torch

pipeline = StableDiffusionPipeline.from_pretrained("ButterChicken98/pv_h_v3", torch_dtype=torch.float16).to("cuda")
image = pipeline("A photo of a healthy green leaf with smooth texture and no visible damage. hd, 4k").images[0]
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
SriramSohan/backward-model | SriramSohan | 2025-04-07T03:41:45Z | 2 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
]
| null | 2025-04-03T23:13:00Z | ---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- generated_from_trainer
model-index:
- name: backward-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# backward-model
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unknown dataset.
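Since the card does not yet include a usage snippet, here is a minimal loading sketch, assuming the repository holds a PEFT (LoRA) adapter for the listed base model (as the `peft` and `adapter` tags suggest); anything beyond loading is undocumented and left out.
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the Llama-2 base, then attach this repo's adapter weights (assumed PEFT layout)
base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
model = PeftModel.from_pretrained(base, "SriramSohan/backward-model")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
```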
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.14.0
- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 3.3.1
- Tokenizers 0.21.0 |
mikeogezi/data_wp_output_gpt_4o_mini_style_595404_llama-3.1-8b-instruct_lora_32_sample_500 | mikeogezi | 2025-04-07T03:41:06Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-07T03:40:57Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
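Until the authors fill this in, here is a minimal sketch, assuming the repository holds standard 🤗 Transformers causal-LM weights (the tags hint at an Unsloth fine-tune of a Llama-3.1-8B-Instruct variant, but the card does not confirm this):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical usage; the repo id comes from this card, everything else is illustrative
repo = "mikeogezi/data_wp_output_gpt_4o_mini_style_595404_llama-3.1-8b-instruct_lora_32_sample_500"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

inputs = tokenizer("Hello", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```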
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
lesso15/73a4e0d8-b74c-4691-bfa7-1b4a05a4607a | lesso15 | 2025-04-07T03:40:26Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Meta-Llama-3-8B",
"base_model:adapter:NousResearch/Meta-Llama-3-8B",
"license:other",
"region:us"
]
| null | 2025-04-07T01:38:21Z | ---
library_name: peft
license: other
base_model: NousResearch/Meta-Llama-3-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 73a4e0d8-b74c-4691-bfa7-1b4a05a4607a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Meta-Llama-3-8B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
  - 209331c9853f5ddc_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/209331c9853f5ddc_train_data.json
  type:
    field_instruction: prompt
    field_output: gold_standard_solution
    format: '{instruction}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
do_eval: true
early_stopping_patience: 3
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 500
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 8
gradient_checkpointing: true
group_by_length: true
hub_model_id: lesso15/73a4e0d8-b74c-4691-bfa7-1b4a05a4607a
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.000215
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 50
lora_alpha: 128
lora_dropout: 0.15
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: 500
micro_batch_size: 4
mlflow_experiment_name: /tmp/209331c9853f5ddc_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 10
optimizer: adamw_torch_fused
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 500
saves_per_epoch: null
seed: 150
sequence_len: 1024
special_tokens:
  pad_token: <|end_of_text|>
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: b85a3cf9-da59-4e21-9a59-7410ca71bfae
wandb_project: 15a
wandb_run: your_name
wandb_runid: b85a3cf9-da59-4e21-9a59-7410ca71bfae
warmup_steps: 100
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 73a4e0d8-b74c-4691-bfa7-1b4a05a4607a
This model is a fine-tuned version of [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2303
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000215
- train_batch_size: 4
- eval_batch_size: 4
- seed: 150
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: AdamW (adamw_torch_fused) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0006 | 1 | 3.3825 |
| 1.3718 | 0.2825 | 500 | 1.2303 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mradermacher/shivneri-marathi-llm-7b-v0.1-GGUF | mradermacher | 2025-04-07T03:38:11Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mr",
"en",
"dataset:ai4bharat/sangraha",
"base_model:amitagh/shivneri-marathi-llm-7b-v0.1",
"base_model:quantized:amitagh/shivneri-marathi-llm-7b-v0.1",
"license:gemma",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-07T02:19:26Z | ---
base_model: amitagh/shivneri-marathi-llm-7b-v0.1
datasets:
- ai4bharat/sangraha
language:
- mr
- en
library_name: transformers
license: gemma
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/amitagh/shivneri-marathi-llm-7b-v0.1
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/shivneri-marathi-llm-7b-v0.1-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
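As a concrete starting point, here is a minimal sketch using the `llama-cpp-python` bindings; the file name comes from the quant table below, while the context size, prompt, and sampling settings are placeholders.
```python
from llama_cpp import Llama

# Point model_path at a downloaded quant from the table below (Q4_K_M shown)
llm = Llama(model_path="shivneri-marathi-llm-7b-v0.1.Q4_K_M.gguf", n_ctx=2048)
out = llm("Write one sentence about Marathi literature.", max_tokens=64)
print(out["choices"][0]["text"])
```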
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/shivneri-marathi-llm-7b-v0.1-GGUF/resolve/main/shivneri-marathi-llm-7b-v0.1.Q2_K.gguf) | Q2_K | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/shivneri-marathi-llm-7b-v0.1-GGUF/resolve/main/shivneri-marathi-llm-7b-v0.1.Q3_K_S.gguf) | Q3_K_S | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/shivneri-marathi-llm-7b-v0.1-GGUF/resolve/main/shivneri-marathi-llm-7b-v0.1.Q3_K_M.gguf) | Q3_K_M | 4.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/shivneri-marathi-llm-7b-v0.1-GGUF/resolve/main/shivneri-marathi-llm-7b-v0.1.Q3_K_L.gguf) | Q3_K_L | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/shivneri-marathi-llm-7b-v0.1-GGUF/resolve/main/shivneri-marathi-llm-7b-v0.1.IQ4_XS.gguf) | IQ4_XS | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/shivneri-marathi-llm-7b-v0.1-GGUF/resolve/main/shivneri-marathi-llm-7b-v0.1.Q4_K_S.gguf) | Q4_K_S | 5.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/shivneri-marathi-llm-7b-v0.1-GGUF/resolve/main/shivneri-marathi-llm-7b-v0.1.Q4_K_M.gguf) | Q4_K_M | 5.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/shivneri-marathi-llm-7b-v0.1-GGUF/resolve/main/shivneri-marathi-llm-7b-v0.1.Q5_K_S.gguf) | Q5_K_S | 6.1 | |
| [GGUF](https://huggingface.co/mradermacher/shivneri-marathi-llm-7b-v0.1-GGUF/resolve/main/shivneri-marathi-llm-7b-v0.1.Q5_K_M.gguf) | Q5_K_M | 6.2 | |
| [GGUF](https://huggingface.co/mradermacher/shivneri-marathi-llm-7b-v0.1-GGUF/resolve/main/shivneri-marathi-llm-7b-v0.1.Q6_K.gguf) | Q6_K | 7.1 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/shivneri-marathi-llm-7b-v0.1-GGUF/resolve/main/shivneri-marathi-llm-7b-v0.1.Q8_0.gguf) | Q8_0 | 9.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/shivneri-marathi-llm-7b-v0.1-GGUF/resolve/main/shivneri-marathi-llm-7b-v0.1.f16.gguf) | f16 | 17.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
aifeifei798/pony | aifeifei798 | 2025-04-07T03:35:54Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2024-12-24T23:41:24Z | ---
license: apache-2.0
---
|
gangu-chettri-kanda-video/gangu.chettri.kanda.video.ORIGINAL | gangu-chettri-kanda-video | 2025-04-07T03:26:56Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-04-07T03:22:33Z |
Gangu Chettri Kanda: The Viral Telegram Controversy That Shocked Nepali Social Media
In recent weeks, the name Gangu Chettri has been trending across various platforms, especially on Telegram and TikTok, due to a viral video that sparked widespread attention and controversy. Commonly referred to as the “Gangu Chettri Kanda”, the incident has ignited a debate on privacy, digital ethics, and the dark side of viral internet culture in Nepal.
What is the Gangu Chettri Kanda? The term "kanda" in Nepali slang refers to a scandal or controversy, usually one involving a leaked video or private matter made public. In this case, Gangu Chettri became the central figure in what many are calling the latest viral Telegram leak.
The video, allegedly featuring Gangu Chettri in a compromising or private situation, was initially circulated through Telegram groups. These groups, often known for sharing viral or sensitive content, quickly spread the video across thousands of users within hours. Soon, the clip made its way to other social media platforms like Facebook, TikTok, and Twitter (X), further amplifying the reach. |
quocphu/PhoWhisper-ct2-FasterWhisper | quocphu | 2025-04-07T03:24:10Z | 0 | 0 | null | [
"automatic-speech-recognition",
"vi",
"base_model:vinai/PhoWhisper-base",
"base_model:finetune:vinai/PhoWhisper-base",
"region:us"
]
| automatic-speech-recognition | 2025-04-07T02:02:07Z | ---
language:
- vi
base_model:
- vinai/PhoWhisper-base
pipeline_tag: automatic-speech-recognition
---
## PhoWhisper-ct2
This repository contains five versions of the PhoWhisper model converted to use CTranslate2 for faster inference. This allows for significant performance improvements, especially on CPU.
> [PhoWhisper homepage](https://github.com/VinAIResearch/PhoWhisper)
> [PhoWhisper paper](https://openreview.net/pdf?id=qsif2awK2L)
## Usage
1. **Installation:**
Ensure you have the necessary libraries installed:
```bash
pip install transformers ctranslate2 faster-whisper
```
2. **Download the CT2 model locally (optional):**
Download the converted model variant you want to use.
3. **Transcription:**
```python
import os
from faster_whisper import WhisperModel
model_size = "quocphu/PhoWhisper-ct2-FasterWhisper/PhoWhisper-medium-ct2-fasterWhisper" # or your local path if you downloaded the model
# Run on GPU with FP16
#model = WhisperModel(model_size, device="cuda", compute_type="float16")
# or run on GPU with INT8
# model = WhisperModel(model_size, device="cuda", compute_type="int8_float16")
# or run on CPU with INT8
model = WhisperModel(model_size, device="cpu", compute_type="int8")
segments, info = model.transcribe("audio.wav", beam_size=5) # Replace audio.wav with your audio file
print("Detected language '%s' with probability %f" % (info.language, info.language_probability))
for segment in segments:
print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
```
## Model Details
* Based on the `PhoWhisper` model.
* Converted using `ct2-transformers-converter`; a conversion sketch is shown below.
* Optimized for faster inference with CTranslate2.
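For reference, a conversion like this can be reproduced from Python with CTranslate2's converter API; the base checkpoint, output directory, and quantization below are illustrative assumptions.
```python
import ctranslate2

# Convert a Hugging Face Whisper checkpoint to CTranslate2 format (illustrative settings)
converter = ctranslate2.converters.TransformersConverter("vinai/PhoWhisper-base")
converter.convert("PhoWhisper-base-ct2", quantization="int8")
```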
## Contributing
Contributions are welcome! Please open an issue or submit a pull request.
## More about Faster-Whisper
> [Faster-Whisper homepage](https://github.com/SYSTRAN/faster-whisper) |
rianHypeee/Dudinha-Lora | rianHypeee | 2025-04-07T03:20:09Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
]
| text-to-image | 2025-04-07T03:06:07Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: Dudinha
---
# Dudinha Lora
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `Dudinha` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "Dudinha",
"lora_weights": "https://huggingface.co/rianHypeee/Dudinha-Lora/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('rianHypeee/Dudinha-Lora', weight_name='lora.safetensors')
image = pipeline('Dudinha').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 1000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/rianHypeee/Dudinha-Lora/discussions) to add images that show off what you’ve made with this LoRA.
|
diliash/emuLM-spt-colored-rounded-multiprompt-singleconv-lora | diliash | 2025-04-07T03:18:57Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"lora_run_rounded_colored_multiprompt_singleconv_20250406_193536",
"20250406_193536",
"lora-finetuning",
"lora_run_rounded_colored_multiprompt_singleconv_20250406_193029",
"20250406_193029",
"rslora_run_rounded_colored_multiprompt_singleconv_20250406_192533",
"20250406_192533",
"lora_run_rounded_colored_visionmoduleswlm_20250405_190119",
"20250405_190119",
"lora_run_rounded_colored_visionmoduleswlmhead_20250405_160653",
"20250405_160653",
"lora_run_rounded_colored_visionmodules_20250405_152620",
"20250405_152620",
"dora_run_rounded_colored_20250405_084201",
"20250405_084201",
"dora_run_rounded_colored_20250405_084004",
"20250405_084004",
"dora_run_rounded_colored_20250405_082842",
"20250405_082842",
"dora_run_rounded_colored_20250405_082523",
"20250405_082523",
"dora_run_rounded_colored_20250405_082257",
"20250405_082257",
"dora_run_rounded_colored_20250405_082135",
"20250405_082135",
"dora_run_rounded_colored_20250405_081932",
"20250405_081932",
"lora_run_rounded_colored_allviews_20250404_233019",
"20250404_233019",
"lora_run_rounded_colored_randomview_20250404_222344",
"20250404_222344",
"lora_run_rounded_colored_randomview_20250404_213541",
"20250404_213541",
"lora_run_rounded_colored_randomview_20250404_213312",
"20250404_213312",
"lora_run_rounded_noimg_20250404_162108",
"20250404_162108",
"lora_run_rounded_noimg_20250404_160637",
"20250404_160637",
"lora_run_rounded_noimg_20250404_160306",
"20250404_160306",
"lora_run_rounded_noimg_20250404_160131",
"20250404_160131",
"lora_run_rounded_noimg_20250404_155922",
"20250404_155922",
"lora_run_rounded_noimg_20250404_155517",
"20250404_155517",
"lora_run_rounded_noimg_20250404_154242",
"20250404_154242",
"lora_run_rounded_noimg_20250404_154200",
"20250404_154200",
"lora_run_edgelabelled_colored_20250404_141612",
"20250404_141612",
"lora_run_edgelabelled_colored_20250404_134651",
"20250404_134651",
"lora_run_rounded_colored_20250403_214449",
"20250403_214449",
"lora_run_rounded_colored_20250403_195038",
"20250403_195038",
"lora_run_rounded_colored_20250403_194012",
"20250403_194012",
"lora_run_rounded_colored_20250403_135921",
"20250403_135921",
"lora_run_rounded_colored_20250403_121200",
"20250403_121200",
"lora_run_rounded_colored_20250403_103814",
"20250403_103814",
"lora_run_rounded_colored_20250403_090510",
"20250403_090510",
"lora_run_rounded_colored_20250403_073345",
"20250403_073345",
"lora_run_rounded_colored_20250402_234837",
"20250402_234837",
"lora_run_rounded_colored_20250402_231331",
"20250402_231331",
"lora_run_rounded_colored_20250402_205929",
"20250402_205929",
"lora_run_rounded_colored_20250402_205628",
"20250402_205628",
"generated_from_trainer",
"lora_run_rounded_colored_20250402_204950",
"20250402_204950",
"final-model",
"processor",
"base_model:meta-llama/Llama-3.2-11B-Vision-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-11B-Vision-Instruct",
"license:llama3.2",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-07T02:30:30Z | ---
library_name: transformers
license: llama3.2
base_model: meta-llama/Llama-3.2-11B-Vision-Instruct
tags:
- lora_run_rounded_colored_multiprompt_singleconv_20250406_193536
- '20250406_193536'
- lora-finetuning
- lora_run_rounded_colored_multiprompt_singleconv_20250406_193029
- '20250406_193029'
- rslora_run_rounded_colored_multiprompt_singleconv_20250406_192533
- '20250406_192533'
- lora_run_rounded_colored_visionmoduleswlm_20250405_190119
- '20250405_190119'
- lora_run_rounded_colored_visionmoduleswlmhead_20250405_160653
- '20250405_160653'
- lora_run_rounded_colored_visionmodules_20250405_152620
- '20250405_152620'
- dora_run_rounded_colored_20250405_084201
- '20250405_084201'
- dora_run_rounded_colored_20250405_084004
- '20250405_084004'
- dora_run_rounded_colored_20250405_082842
- '20250405_082842'
- dora_run_rounded_colored_20250405_082523
- '20250405_082523'
- dora_run_rounded_colored_20250405_082257
- '20250405_082257'
- dora_run_rounded_colored_20250405_082135
- '20250405_082135'
- dora_run_rounded_colored_20250405_081932
- '20250405_081932'
- lora_run_rounded_colored_allviews_20250404_233019
- '20250404_233019'
- lora_run_rounded_colored_randomview_20250404_222344
- '20250404_222344'
- lora_run_rounded_colored_randomview_20250404_213541
- '20250404_213541'
- lora_run_rounded_colored_randomview_20250404_213312
- '20250404_213312'
- lora_run_rounded_noimg_20250404_162108
- '20250404_162108'
- lora_run_rounded_noimg_20250404_160637
- '20250404_160637'
- lora_run_rounded_noimg_20250404_160306
- '20250404_160306'
- lora_run_rounded_noimg_20250404_160131
- '20250404_160131'
- lora_run_rounded_noimg_20250404_155922
- '20250404_155922'
- lora_run_rounded_noimg_20250404_155517
- '20250404_155517'
- lora_run_rounded_noimg_20250404_154242
- '20250404_154242'
- lora_run_rounded_noimg_20250404_154200
- '20250404_154200'
- lora_run_edgelabelled_colored_20250404_141612
- '20250404_141612'
- lora_run_edgelabelled_colored_20250404_134651
- '20250404_134651'
- lora_run_rounded_colored_20250403_214449
- '20250403_214449'
- lora_run_rounded_colored_20250403_195038
- '20250403_195038'
- lora_run_rounded_colored_20250403_194012
- '20250403_194012'
- lora_run_rounded_colored_20250403_135921
- '20250403_135921'
- lora_run_rounded_colored_20250403_121200
- '20250403_121200'
- lora_run_rounded_colored_20250403_103814
- '20250403_103814'
- lora_run_rounded_colored_20250403_090510
- '20250403_090510'
- lora_run_rounded_colored_20250403_073345
- '20250403_073345'
- lora_run_rounded_colored_20250402_234837
- '20250402_234837'
- lora_run_rounded_colored_20250402_231331
- '20250402_231331'
- lora_run_rounded_colored_20250402_205929
- '20250402_205929'
- lora_run_rounded_colored_20250402_205628
- '20250402_205628'
- generated_from_trainer
- lora_run_rounded_colored_20250402_204950
- '20250402_204950'
- final-model
- processor
model-index:
- name: checkpoints
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# checkpoints
This model is a fine-tuned version of [meta-llama/Llama-3.2-11B-Vision-Instruct](https://huggingface.co/meta-llama/Llama-3.2-11B-Vision-Instruct) on an unspecified dataset.
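No usage example is given, so here is a minimal loading sketch, assuming the checkpoint exposes the base model's 🤗 Transformers vision-language interface (`MllamaForConditionalGeneration`); that interface choice is an assumption, not stated on the card.
```python
from transformers import AutoProcessor, MllamaForConditionalGeneration

# Assumed interface: same as the Llama-3.2-11B-Vision-Instruct base model
repo = "diliash/emuLM-spt-colored-rounded-multiprompt-singleconv-lora"
processor = AutoProcessor.from_pretrained(repo)
model = MllamaForConditionalGeneration.from_pretrained(repo)
```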
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 2
- total_eval_batch_size: 2
- optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
### Framework versions
- Transformers 4.50.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
ayushexel/emb-all-MiniLM-L6-v2-gooaq-1-epochs | ayushexel | 2025-04-07T03:18:38Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:1995000",
"loss:MultipleNegativesRankingLoss",
"en",
"dataset:sentence-transformers/gooaq",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:sentence-transformers/all-MiniLM-L6-v2",
"base_model:finetune:sentence-transformers/all-MiniLM-L6-v2",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2025-04-07T03:18:33Z | ---
language:
- en
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:1995000
- loss:MultipleNegativesRankingLoss
base_model: sentence-transformers/all-MiniLM-L6-v2
widget:
- source_sentence: what are the main differences between interphase and the mitotic
phase?
sentences:
- It might surprise you, but you're probably wearing plastic clothes. ... Many of
our clothes contain plastics like polyester, nylon, acrylic and polyamide. In
fact most new fabrics are made of plastic – up to 64% of them. The thing is, every
time we wash these materials they shed millions of plastic microfibres.
- 'The cell cycle has two major phases: interphase and the mitotic phase (Figure
6.3). During interphase, the cell grows and DNA is replicated. During the mitotic
phase, the replicated DNA and cytoplasmic contents are separated and the cell
divides.'
- '2: Chipotle *Whole30 Approved* Chipotle is another awesome option that''s officially
Whole30 Approved. They recently introduced Carne Asada and Chicken as compliant
options in addition to the carnitas (they''ve been compliant for years!), so it''s
exciting to have three protein options.'
- source_sentence: how much do brides on say yes to the dress get paid?
sentences:
- Cooking with rice So if you're not going to eat rice straight after you've cooked
it, you need to store it in the fridge — preferably within an hour or so, but
definitely within four hours. Refrigeration won't kill the bacteria but it will
slow down their growth.
- Five of the most common determinants of demand are the price of the goods or service,
the income of the buyers, the price of related goods, the preference of the buyer,
and the population of the buyers.
- Brides aren't compensated for being on the show (unless they're named Omarosa,
that is). Plus, you need to be prepared to spend big on your gown. It's not unusual
to see people on the show spend more than $10,000 on a dress.
- source_sentence: when was the tornado in jarrell tx?
sentences:
- The Chiefs were overwhelmed by the Bills and lost the game by a score of 30–13.
The Chiefs' victory on January 16, 1994, against the Oilers remained the franchise's
last post-season victory for 21 years until their 30–0 victory over the Houston
Texans on January 9, 2016.
- On May 27, 1997, one of the most violent tornadoes in modern U.S. history produced
close-to-unfathomable damage on the outskirts of Jarrell, TX, located about 40
miles north-northeast of Austin. There are only a few photos and videos of this
monster, but it is a disaster well worth remembering.
- What is open and closed circulation? In the open circulation, the blood is not
enclosed in the blood vessels and is pumped into a cavity called hemocoel. On
the contrary, in the closed circulation, the blood is pumped through the vessels
separate from the interstitial fluid of the body.
- source_sentence: what is the relationship between photosynthesis cellular respiration
and the carbon cycle?
sentences:
- Infected people are most contagious up to about 2 weeks after the cough begins.
Antibiotics may shorten the amount of time someone is contagious. While pertussis
vaccines are the most effective tool to prevent this disease, no vaccine is 100%
effective.
- '[''Download and launch iTunes on your computer.'', ''Click iTunes music library
to choose the song you want to make as a ringtone.'', ''Right-click the song and
choose to Get Info.'', ''Click Options to set the ringtone volume and start &
stop time of the ringtone, and click OK.'']'
- Cellular respiration and photosynthesis are important parts of the carbon cycle.
The carbon cycle is the pathways through which carbon is recycled in the biosphere.
While cellular respiration releases carbon dioxide into the environment, photosynthesis
pulls carbon dioxide out of the atmosphere.
- source_sentence: what is usb c ss?
sentences:
- Please do not use any air fresheners or fragrances in the same room as guinea
pigs. They have a rather small and very sensitive respiratory system.
- “Global warming” refers to the rise in global temperatures due mainly to the increasing
concentrations of greenhouse gases in the atmosphere. “Climate change” refers
to the increasing changes in the measures of climate over a long period of time
– including precipitation, temperature, and wind patterns.
- The USB Type-C specification is pretty confusing. ... The standard USB logo to
identify USB 2.0 ports or slower. "SS" markings, which stand for SuperSpeed, to
identify USB 3.0 ports, otherwise known as USB 3.1 gen 1. "10" markings, which
stand for 10 Gbps, to identify USB 3.1 gen 2 ports with ultra-fast connectivity.
datasets:
- sentence-transformers/gooaq
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy
model-index:
- name: SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2
results:
- task:
type: triplet
name: Triplet
dataset:
name: gooqa dev
type: gooqa-dev
metrics:
- type: cosine_accuracy
value: 0.5691999793052673
name: Cosine Accuracy
---
# SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) on the [gooaq](https://huggingface.co/datasets/sentence-transformers/gooaq) dataset. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) <!-- at revision c9745ed1d9f207416be6d2e6f8de32d1f16199bf -->
- **Maximum Sequence Length:** 256 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- [gooaq](https://huggingface.co/datasets/sentence-transformers/gooaq)
- **Language:** en
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("ayushexel/emb-all-MiniLM-L6-v2-gooaq-1-epochs")
# Run inference
sentences = [
'what is usb c ss?',
'The USB Type-C specification is pretty confusing. ... The standard USB logo to identify USB 2.0 ports or slower. "SS" markings, which stand for SuperSpeed, to identify USB 3.0 ports, otherwise known as USB 3.1 gen 1. "10" markings, which stand for 10 Gbps, to identify USB 3.1 gen 2 ports with ultra-fast connectivity.',
'“Global warming” refers to the rise in global temperatures due mainly to the increasing concentrations of greenhouse gases in the atmosphere. “Climate change” refers to the increasing changes in the measures of climate over a long period of time – including precipitation, temperature, and wind patterns.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Triplet
* Dataset: `gooqa-dev`
* Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| **cosine_accuracy** | **0.5692** |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### gooaq
* Dataset: [gooaq](https://huggingface.co/datasets/sentence-transformers/gooaq) at [b089f72](https://huggingface.co/datasets/sentence-transformers/gooaq/tree/b089f728748a068b7bc5234e5bcf5b25e3c8279c)
* Size: 1,995,000 training samples
* Columns: <code>question</code> and <code>answer</code>
* Approximate statistics based on the first 1000 samples:
| | question | answer |
|:--------|:----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 8 tokens</li><li>mean: 11.86 tokens</li><li>max: 23 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 60.74 tokens</li><li>max: 133 tokens</li></ul> |
* Samples:
| question | answer |
|:---------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>can twine be a noun?</code> | <code>noun. a strong thread or string composed of two or more strands twisted together. an act of twining, twisting, or interweaving.</code> |
| <code>what is bo id in nsdl?</code> | <code>The demat account number allotted to the beneficiary holder(s) by DP is known as the BO-ID. In CDSL it is 16 digits number. It is an intermediary (an institution) between the investor and the depository.</code> |
| <code>how much does it cost to run an electric fan all night?</code> | <code>The average indoor ceiling fan costs around 0.13c to 1.29c per hour to run, or between $1.90 and $18.85 each year. This will depend on the fan's speed settings, how frequently it's used, and the rate you pay on electricity. Like most electrical appliances, a ceiling fan's power is measured in watts.</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Evaluation Dataset
#### gooaq
* Dataset: [gooaq](https://huggingface.co/datasets/sentence-transformers/gooaq) at [b089f72](https://huggingface.co/datasets/sentence-transformers/gooaq/tree/b089f728748a068b7bc5234e5bcf5b25e3c8279c)
* Size: 5,000 evaluation samples
* Columns: <code>question</code> and <code>answer</code>
* Approximate statistics based on the first 1000 samples:
| | question | answer |
|:--------|:---------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 8 tokens</li><li>mean: 11.8 tokens</li><li>max: 21 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 60.68 tokens</li><li>max: 123 tokens</li></ul> |
* Samples:
| question | answer |
|:--------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>how much water should a person drink in 8 hours?</code> | <code>Health authorities commonly recommend eight 8-ounce glasses, which equals about 2 liters, or half a gallon. This is called the 8×8 rule and is very easy to remember. However, some health gurus believe that you need to sip on water constantly throughout the day, even when you're not thirsty.</code> |
| <code>what does this mean in excel #name?</code> | <code>Important: The #NAME? error signifies that something needs to be corrected in the syntax, so when you see the error in your formula, resolve it. Do not use any error-handling functions such as IFERROR to mask the error. To avoid typos in formula names, use the Formula Wizard in Excel.</code> |
| <code>are hydroflask good for the environment?</code> | <code>Hydro Flasks are a new fad among many students and adults to help minimize plastic waste in the oceans. Hydro Flasks are great because they use a type of metal called TempShield, which keeps your beverage or food either hot for up to six hours or cold for up to twenty-four hours.</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 256
- `per_device_eval_batch_size`: 256
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
- `fp16`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 256
- `per_device_eval_batch_size`: 256
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `tp_size`: 0
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss | gooqa-dev_cosine_accuracy |
|:------:|:----:|:-------------:|:---------------:|:-------------------------:|
| -1 | -1 | - | - | 0.5368 |
| 0.0128 | 100 | 0.0756 | - | - |
| 0.0257 | 200 | 0.0701 | - | - |
| 0.0385 | 300 | 0.0661 | - | - |
| 0.0513 | 400 | 0.0655 | - | - |
| 0.0642 | 500 | 0.0667 | - | - |
| 0.0770 | 600 | 0.0683 | - | - |
| 0.0898 | 700 | 0.0665 | - | - |
| 0.1027 | 800 | 0.0641 | - | - |
| 0.1155 | 900 | 0.067 | - | - |
| 0.1283 | 1000 | 0.0655 | 0.0460 | 0.5366 |
| 0.1412 | 1100 | 0.065 | - | - |
| 0.1540 | 1200 | 0.0643 | - | - |
| 0.1668 | 1300 | 0.0607 | - | - |
| 0.1796 | 1400 | 0.0605 | - | - |
| 0.1925 | 1500 | 0.0596 | - | - |
| 0.2053 | 1600 | 0.0609 | - | - |
| 0.2181 | 1700 | 0.0611 | - | - |
| 0.2310 | 1800 | 0.0608 | - | - |
| 0.2438 | 1900 | 0.0613 | - | - |
| 0.2566 | 2000 | 0.0593 | 0.0393 | 0.5468 |
| 0.2695 | 2100 | 0.0599 | - | - |
| 0.2823 | 2200 | 0.0551 | - | - |
| 0.2951 | 2300 | 0.0559 | - | - |
| 0.3080 | 2400 | 0.0573 | - | - |
| 0.3208 | 2500 | 0.0532 | - | - |
| 0.3336 | 2600 | 0.0541 | - | - |
| 0.3465 | 2700 | 0.0554 | - | - |
| 0.3593 | 2800 | 0.055 | - | - |
| 0.3721 | 2900 | 0.0555 | - | - |
| 0.3850 | 3000 | 0.0537 | 0.0386 | 0.5452 |
| 0.3978 | 3100 | 0.057 | - | - |
| 0.4106 | 3200 | 0.0516 | - | - |
| 0.4235 | 3300 | 0.0535 | - | - |
| 0.4363 | 3400 | 0.0554 | - | - |
| 0.4491 | 3500 | 0.0509 | - | - |
| 0.4620 | 3600 | 0.0503 | - | - |
| 0.4748 | 3700 | 0.0496 | - | - |
| 0.4876 | 3800 | 0.0499 | - | - |
| 0.5004 | 3900 | 0.0525 | - | - |
| 0.5133 | 4000 | 0.0484 | 0.0355 | 0.5582 |
| 0.5261 | 4100 | 0.0497 | - | - |
| 0.5389 | 4200 | 0.0476 | - | - |
| 0.5518 | 4300 | 0.0476 | - | - |
| 0.5646 | 4400 | 0.0496 | - | - |
| 0.5774 | 4500 | 0.0497 | - | - |
| 0.5903 | 4600 | 0.0488 | - | - |
| 0.6031 | 4700 | 0.0486 | - | - |
| 0.6159 | 4800 | 0.048 | - | - |
| 0.6288 | 4900 | 0.0496 | - | - |
| 0.6416 | 5000 | 0.049 | 0.0333 | 0.5630 |
| 0.6544 | 5100 | 0.0476 | - | - |
| 0.6673 | 5200 | 0.047 | - | - |
| 0.6801 | 5300 | 0.0474 | - | - |
| 0.6929 | 5400 | 0.0514 | - | - |
| 0.7058 | 5500 | 0.0468 | - | - |
| 0.7186 | 5600 | 0.0477 | - | - |
| 0.7314 | 5700 | 0.0466 | - | - |
| 0.7443 | 5800 | 0.0449 | - | - |
| 0.7571 | 5900 | 0.0458 | - | - |
| 0.7699 | 6000 | 0.0438 | 0.0325 | 0.5636 |
| 0.7828 | 6100 | 0.047 | - | - |
| 0.7956 | 6200 | 0.0463 | - | - |
| 0.8084 | 6300 | 0.0435 | - | - |
| 0.8212 | 6400 | 0.0481 | - | - |
| 0.8341 | 6500 | 0.0448 | - | - |
| 0.8469 | 6600 | 0.0472 | - | - |
| 0.8597 | 6700 | 0.0445 | - | - |
| 0.8726 | 6800 | 0.0452 | - | - |
| 0.8854 | 6900 | 0.0448 | - | - |
| 0.8982 | 7000 | 0.0441 | 0.0307 | 0.5670 |
| 0.9111 | 7100 | 0.0428 | - | - |
| 0.9239 | 7200 | 0.044 | - | - |
| 0.9367 | 7300 | 0.0429 | - | - |
| 0.9496 | 7400 | 0.0436 | - | - |
| 0.9624 | 7500 | 0.0444 | - | - |
| 0.9752 | 7600 | 0.0419 | - | - |
| 0.9881 | 7700 | 0.043 | - | - |
| -1 | -1 | - | - | 0.5692 |
### Framework Versions
- Python: 3.11.0
- Sentence Transformers: 4.0.1
- Transformers: 4.50.3
- PyTorch: 2.6.0+cu124
- Accelerate: 1.5.2
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
clembench-playpen/Llama-3.1-70B-Instruct_KTO_Final_KTO_noSFT | clembench-playpen | 2025-04-07T03:17:06Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"unsloth",
"trl",
"kto",
"arxiv:2402.01306",
"base_model:meta-llama/Llama-3.1-70B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-70B-Instruct",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-06T10:28:56Z | ---
base_model: meta-llama/Llama-3.1-70B-Instruct
library_name: transformers
model_name: outputs
tags:
- generated_from_trainer
- unsloth
- trl
- kto
licence: license
---
# Model Card for outputs
This model is a fine-tuned version of [meta-llama/Llama-3.1-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-70B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="clembench-playpen/Llama-3.1-70B-Instruct_KTO_Final_KTO_noSFT", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/dmazzaccara_backup/playpen_Llama-3.1-70B-Instruct/runs/37ijmv2w)
This model was trained with KTO, a method introduced in [KTO: Model Alignment as Prospect Theoretic Optimization](https://huggingface.co/papers/2402.01306).
### Framework versions
- TRL: 0.12.2
- Transformers: 4.46.3
- Pytorch: 2.5.1
- Datasets: 3.2.0
- Tokenizers: 0.20.3
## Citations
Cite KTO as:
```bibtex
@article{ethayarajh2024kto,
title = {{KTO: Model Alignment as Prospect Theoretic Optimization}},
author = {Kawin Ethayarajh and Winnie Xu and Niklas Muennighoff and Dan Jurafsky and Douwe Kiela},
year = 2024,
eprint = {arXiv:2402.01306},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
avijra/DeepSeek-R1-Medical-COT | avijra | 2025-04-07T03:16:11Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-07T03:15:44Z | ---
base_model: unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** avijra
- **License:** apache-2.0
- **Finetuned from model:** unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
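The card ships no inference example; here is a minimal sketch, assuming the repository holds standard 🤗 Transformers causal-LM weights (the prompt format expected by the DeepSeek-R1 distill base is not documented here, so the prompt below is purely illustrative).
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "avijra/DeepSeek-R1-Medical-COT"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

# Illustrative prompt; real usage likely needs the base model's chat template
inputs = tokenizer("Question: What are common causes of chest pain?\nAnswer:", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```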
|
AaronZ345/TCSinger | AaronZ345 | 2025-04-07T03:15:14Z | 0 | 0 | null | [
"arxiv:2409.15977",
"region:us"
]
| null | 2025-04-07T02:27:28Z | # TCSinger: Zero-Shot Singing Voice Synthesis with Style Transfer and Multi-Level Style Control
#### Yu Zhang, Ziyue Jiang, Ruiqi Li, Changhao Pan, Jinzheng He, Rongjie Huang, Chuxin Wang, Zhou Zhao | Zhejiang University
Checkpoints of [TCSinger (EMNLP 2024)](https://aclanthology.org/2024.emnlp-main.117/): Zero-Shot Singing Voice Synthesis with Style Transfer and Multi-Level Style Control.
[](https://arxiv.org/abs/2409.15977)
[](https://github.com/AaronZ345/TCSinger) |
Shaund91/Hermes-3-Llama-3.1-8B-mlx-8Bit | Shaund91 | 2025-04-07T03:13:24Z | 0 | 0 | mlx | [
"mlx",
"safetensors",
"llama",
"Llama-3",
"instruct",
"finetune",
"chatml",
"gpt4",
"synthetic data",
"distillation",
"function calling",
"json mode",
"axolotl",
"roleplaying",
"chat",
"mlx-my-repo",
"en",
"base_model:NousResearch/Hermes-3-Llama-3.1-8B",
"base_model:quantized:NousResearch/Hermes-3-Llama-3.1-8B",
"license:llama3",
"8-bit",
"region:us"
]
| null | 2025-04-07T03:12:40Z | ---
language:
- en
license: llama3
tags:
- Llama-3
- instruct
- finetune
- chatml
- gpt4
- synthetic data
- distillation
- function calling
- json mode
- axolotl
- roleplaying
- chat
- mlx
- mlx-my-repo
base_model: NousResearch/Hermes-3-Llama-3.1-8B
widget:
- example_title: Hermes 3
messages:
- role: system
content: You are a sentient, superintelligent artificial general intelligence,
here to teach and assist me.
- role: user
content: What is the meaning of life?
model-index:
- name: Hermes-3-Llama-3.1-70B
results: []
---
# Shaund91/Hermes-3-Llama-3.1-8B-mlx-8Bit
The Model [Shaund91/Hermes-3-Llama-3.1-8B-mlx-8Bit](https://huggingface.co/Shaund91/Hermes-3-Llama-3.1-8B-mlx-8Bit) was converted to MLX format from [NousResearch/Hermes-3-Llama-3.1-8B](https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-8B) using mlx-lm version **0.22.1**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("Shaund91/Hermes-3-Llama-3.1-8B-mlx-8Bit")
prompt="hello"
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
DoomBreaker0902/depression_style_LoRA | DoomBreaker0902 | 2025-04-07T03:12:21Z | 2 | 0 | diffusers | [
"diffusers",
"tensorboard",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
]
| text-to-image | 2025-04-04T00:20:50Z | ---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: openrail++
instance_prompt: photo collage in DEPRESSION style
widget: []
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - DoomBreaker0902/depression_style_LoRA
<Gallery />
## Model description
These are DoomBreaker0902/depression_style_LoRA LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `photo collage in DEPRESSION style` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/DoomBreaker0902/depression_style_LoRA/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
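A minimal sketch using the 🧨 diffusers library; the weight filename, device, and output path below are illustrative assumptions, not details confirmed by this card: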
```python
import torch
from diffusers import AutoPipelineForText2Image

# Assumptions: the adapter was saved under the default
# `pytorch_lora_weights.safetensors` name and a CUDA GPU is available.
pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights(
    "DoomBreaker0902/depression_style_LoRA",
    weight_name="pytorch_lora_weights.safetensors",
)
# Use the trigger phrase from the "Trigger words" section above.
image = pipeline("photo collage in DEPRESSION style").images[0]
image.save("depression_style.png")
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
marcuscedricridia/QwQ-R1984-32B | marcuscedricridia | 2025-04-07T03:10:30Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"abliterated",
"uncensored",
"SEARCH",
"conversational",
"en",
"base_model:Qwen/QwQ-32B",
"base_model:finetune:Qwen/QwQ-32B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-07T02:53:10Z | ---
license: apache-2.0
language:
- en
base_model:
- Qwen/QwQ-32B
tags:
- abliterated
- uncensored
- SEARCH
library_name: transformers
---
# VIDraft/QwQ-R1984-32B
QwQ is the reasoning model of the Qwen series. Compared with conventional instruction-tuned models, QwQ can think and reason, achieving significantly better performance on downstream tasks, especially hard problems. QwQ-32B is the medium-sized reasoning model, competitive with state-of-the-art reasoning models such as DeepSeek-R1 and o1-mini.

QwQ-R1984-32B is an enhanced version of QwQ-32B that adds uncensored response generation and deep-research functionality, enabling more unrestricted answers and in-depth information grounded in real-time web searches.
# This repo contains the QwQ-R1984-32B model, which has the following features:
- **Type:** Reasoning-enhanced Causal Language Model
- **Training Stage:** Pretraining, Supervised Finetuning, Reinforcement Learning, and Uncensoring
- **Architecture:** Transformers with RoPE, SwiGLU, RMSNorm, and Attention QKV bias
- **Number of Parameters:** 32.5B
- **Number of Parameters (Non-Embedding):** 31.0B
- **Number of Layers:** 64
- **Number of Attention Heads (GQA):** 40 for Q and 8 for KV
- **Context Length:** 8,000 tokens
- **Additional Features:**
- Deep research capabilities via web search
- Uncensored response generation
# Quickstart
Here is a code snippet showing how to use `apply_chat_template` to load the tokenizer and model and generate content.
```py
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "VIDraft/QwQ-R1984-32B"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "How many r's are in the word \"strawberry\""
messages = [
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=32768
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
``` |
TareksLab/Feral-V2-LLaMa-70B | TareksLab | 2025-04-07T03:09:48Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2406.11617",
"base_model:ReadyArt/Forgotten-Safeword-70B-v5.0",
"base_model:merge:ReadyArt/Forgotten-Safeword-70B-v5.0",
"base_model:SicariusSicariiStuff/Negative_LLAMA_70B",
"base_model:merge:SicariusSicariiStuff/Negative_LLAMA_70B",
"base_model:TheDrummer/Fallen-Llama-3.3-R1-70B-v1",
"base_model:merge:TheDrummer/Fallen-Llama-3.3-R1-70B-v1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-07T02:37:52Z | ---
base_model:
- SicariusSicariiStuff/Negative_LLAMA_70B
- ReadyArt/Forgotten-Safeword-70B-v5.0
- TheDrummer/Fallen-Llama-3.3-R1-70B-v1
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Linear DELLA](https://arxiv.org/abs/2406.11617) merge method using [SicariusSicariiStuff/Negative_LLAMA_70B](https://huggingface.co/SicariusSicariiStuff/Negative_LLAMA_70B) as a base.
### Models Merged
The following models were included in the merge:
* [ReadyArt/Forgotten-Safeword-70B-v5.0](https://huggingface.co/ReadyArt/Forgotten-Safeword-70B-v5.0)
* [TheDrummer/Fallen-Llama-3.3-R1-70B-v1](https://huggingface.co/TheDrummer/Fallen-Llama-3.3-R1-70B-v1)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: ReadyArt/Forgotten-Safeword-70B-v5.0
parameters:
weight: 0.40
density: 0.7
epsilon: 0.2
lambda: 1.1
- model: TheDrummer/Fallen-Llama-3.3-R1-70B-v1
parameters:
weight: 0.40
density: 0.7
epsilon: 0.2
lambda: 1.1
- model: SicariusSicariiStuff/Negative_LLAMA_70B
parameters:
weight: 0.40
density: 0.7
epsilon: 0.1
lambda: 1.0
merge_method: della_linear
base_model: SicariusSicariiStuff/Negative_LLAMA_70B
parameters:
normalize: false
int8_mask: true
dtype: bfloat16
chat_template: llama3
tokenizer:
source: base
```
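For reference, the merged checkpoint loads like any other Llama-family model; a minimal sketch, assuming standard transformers usage and enough GPU memory for a 70B model in bfloat16:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "TareksLab/Feral-V2-LLaMa-70B"  # this repository
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16, device_map="auto")
```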
|
bowilleatyou/9868de8c-66b1-4ee7-8bc0-678f8f448fb3 | bowilleatyou | 2025-04-07T03:08:02Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-07T02:11:54Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
davidcest65/davidclon | davidcest65 | 2025-04-07T03:06:10Z | 0 | 0 | null | [
"license:other",
"region:us"
]
| null | 2025-04-07T02:24:34Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
--- |
adamtc/HSv2q | adamtc | 2025-04-07T03:02:55Z | 0 | 0 | transformers | [
"transformers",
"gemma3_text",
"text-generation",
"text-generation-inference",
"unsloth",
"gemma3",
"conversational",
"en",
"base_model:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-07T02:29:52Z | ---
base_model: unsloth/gemma-3-4b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** adamtc
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3-4b-it-unsloth-bnb-4bit
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
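A minimal usage sketch with 🤗 Transformers (assumptions: the repo contains merged weights rather than adapters only, a transformers version with Gemma 3 support is installed, and the prompt is illustrative):
```python
from transformers import pipeline

# Hedged example; generation settings are arbitrary.
chat = pipeline("text-generation", model="adamtc/HSv2q", device_map="auto")
messages = [{"role": "user", "content": "Hello! What can you do?"}]
print(chat(messages, max_new_tokens=64, return_full_text=False)[0]["generated_text"])
```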
|
cfpark00/toy-multistep-tokenizer-nn_10-na_5 | cfpark00 | 2025-04-07T03:01:49Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-07T03:01:48Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
MinHyeong/dolly-v2-7b_reg0.1 | MinHyeong | 2025-04-07T03:00:08Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-07T02:53:21Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mikeogezi/data_wp_output_gpt_4o_mini_style_595404_llama-3.1-8b-instruct_lora_128_sample_500 | mikeogezi | 2025-04-07T02:59:31Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-07T02:58:56Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
tscstudios/usodhussdcd32qtgy6ppezh2gxb2_8421414f-36ad-4010-a3b5-b14e84aa653e | tscstudios | 2025-04-07T02:58:16Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
]
| text-to-image | 2025-04-07T02:58:15Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# Usodhussdcd32Qtgy6Ppezh2Gxb2_8421414F 36Ad 4010 A3B5 B14E84Aa653E
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "TOK",
"lora_weights": "https://huggingface.co/tscstudios/usodhussdcd32qtgy6ppezh2gxb2_8421414f-36ad-4010-a3b5-b14e84aa653e/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('tscstudios/usodhussdcd32qtgy6ppezh2gxb2_8421414f-36ad-4010-a3b5-b14e84aa653e', weight_name='lora.safetensors')
image = pipeline('TOK').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/tscstudios/usodhussdcd32qtgy6ppezh2gxb2_8421414f-36ad-4010-a3b5-b14e84aa653e/discussions) to add images that show off what you’ve made with this LoRA.
|
memevis/WL75 | memevis | 2025-04-07T02:57:27Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-07T02:54:55Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
memevis/WL74 | memevis | 2025-04-07T02:57:10Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-07T02:54:37Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ZMC2019/Qwen2.5-1.5B-AMX | ZMC2019 | 2025-04-07T02:56:04Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"sft",
"conversational",
"dataset:ZMC2019/AM1.4M",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-05T18:21:06Z | ---
base_model: Qwen/Qwen2.5-1.5B-Instruct
datasets: ZMC2019/AM1.4M
library_name: transformers
model_name: Qwen2.5-1.5B-AMX
tags:
- generated_from_trainer
- open-r1
- trl
- sft
licence: license
---
# Model Card for Qwen2.5-1.5B-AMX
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) on the [ZMC2019/AM1.4M](https://huggingface.co/datasets/ZMC2019/AM1.4M) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ZMC2019/Qwen2.5-1.5B-AMX", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/chenzhuoming911/huggingface/runs/7m99qruf)
This model was trained with SFT.
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.5.1
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
memevis/WL79 | memevis | 2025-04-07T02:54:47Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-07T02:51:52Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ksu56yh65y45e/dysphic_render_LoRA | ksu56yh65y45e | 2025-04-07T02:54:25Z | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
]
| text-to-image | 2025-04-07T02:53:34Z | ---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: openrail++
instance_prompt: render in DYSPHIC style
widget: []
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - ksu56yh65y45e/dysphic_render_LoRA
<Gallery />
## Model description
These are ksu56yh65y45e/dysphic_render_LoRA LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `render in DYSPHIC style` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/ksu56yh65y45e/dysphic_render_LoRA/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
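A minimal sketch using the 🧨 diffusers library; the weight filename, device, and output path below are illustrative assumptions, not details confirmed by this card: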
```python
import torch
from diffusers import AutoPipelineForText2Image

# Assumptions: the adapter was saved under the default
# `pytorch_lora_weights.safetensors` name and a CUDA GPU is available.
pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights(
    "ksu56yh65y45e/dysphic_render_LoRA",
    weight_name="pytorch_lora_weights.safetensors",
)
# Use the trigger phrase from the "Trigger words" section above.
image = pipeline("render in DYSPHIC style").images[0]
image.save("dysphic_render.png")
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
mohammadmahdinouri/expressive-distilled-checkpoints-2 | mohammadmahdinouri | 2025-04-07T02:53:38Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-03-21T15:35:34Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
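In the meantime, a minimal sketch for loading this checkpoint — the repository metadata marks it as a Llama-architecture `text-generation` model — might look like this (the prompt is illustrative):
```python
# Hedged sketch, not an official snippet: load the checkpoint with transformers.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="mohammadmahdinouri/expressive-distilled-checkpoints-2",
)
print(generator("Once upon a time,", max_new_tokens=50)[0]["generated_text"])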
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
KCourtney/bert-base-uncased-issues-128 | KCourtney | 2025-04-07T02:52:01Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"fill-mask",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2025-04-07T02:35:37Z | ---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased-issues-128
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-issues-128
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2341
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 16
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.1014 | 1.0 | 291 | 1.7049 |
| 1.6352 | 2.0 | 582 | 1.5080 |
| 1.4965 | 3.0 | 873 | 1.3509 |
| 1.3996 | 4.0 | 1164 | 1.3444 |
| 1.333 | 5.0 | 1455 | 1.2414 |
| 1.2871 | 6.0 | 1746 | 1.3665 |
| 1.2358 | 7.0 | 2037 | 1.2885 |
| 1.2016 | 8.0 | 2328 | 1.3422 |
| 1.1692 | 9.0 | 2619 | 1.2215 |
| 1.145 | 10.0 | 2910 | 1.1708 |
| 1.1269 | 11.0 | 3201 | 1.1325 |
| 1.1127 | 12.0 | 3492 | 1.1719 |
| 1.0898 | 13.0 | 3783 | 1.2175 |
| 1.0759 | 14.0 | 4074 | 1.2070 |
| 1.0764 | 15.0 | 4365 | 1.2166 |
| 1.0608 | 16.0 | 4656 | 1.2341 |
### Framework versions
- Transformers 4.50.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
memevis/WL76 | memevis | 2025-04-07T02:51:13Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-07T02:48:47Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
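Until the author fills this in, a hedged sketch based on the repository metadata (Llama architecture, `text-generation` pipeline) could be:
```python
# Hedged sketch: assumes the checkpoint loads as a standard causal LM.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("memevis/WL76")
model = AutoModelForCausalLM.from_pretrained("memevis/WL76")
inputs = tokenizer("Hello, world!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```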
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
memevis/WL77 | memevis | 2025-04-07T02:51:08Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-07T02:48:24Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
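Pending details from the author, a hedged sketch along the same lines as the repository metadata (Llama-architecture causal LM) might be:
```python
# Hedged sketch: load tokenizer and model directly, assuming a standard causal LM.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("memevis/WL77")
model = AutoModelForCausalLM.from_pretrained("memevis/WL77", torch_dtype=torch.float16)
ids = tok("Write one sentence about llamas:", return_tensors="pt")
print(tok.decode(model.generate(**ids, max_new_tokens=40)[0], skip_special_tokens=True))
```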
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
kk-aivio/e7f9bd05-2796-4fb5-a462-6cd3d7cd921e | kk-aivio | 2025-04-07T02:50:24Z | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"dataset:059ecdfd65a82976_train_data.json",
"base_model:fxmarty/tiny-dummy-qwen2",
"base_model:adapter:fxmarty/tiny-dummy-qwen2",
"region:us"
]
| null | 2025-04-07T02:50:19Z | ---
library_name: peft
tags:
- generated_from_trainer
datasets:
- 059ecdfd65a82976_train_data.json
base_model: fxmarty/tiny-dummy-qwen2
model-index:
- name: kk-aivio/e7f9bd05-2796-4fb5-a462-6cd3d7cd921e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kk-aivio/e7f9bd05-2796-4fb5-a462-6cd3d7cd921e
This model was trained from scratch on the /workspace/input_data/059ecdfd65a82976_train_data.json dataset.
It achieves the following results on the evaluation set:
- Loss: 11.9005
## Model description
More information needed
## Intended uses & limitations
More information needed
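No usage snippet is provided; since the metadata marks this repository as a PEFT adapter for `fxmarty/tiny-dummy-qwen2`, loading it would typically look like the following sketch (it assumes the adapter targets a causal-LM head):
```python
# Hedged sketch: attach the PEFT adapter to its base model.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("fxmarty/tiny-dummy-qwen2")
model = PeftModel.from_pretrained(base, "kk-aivio/e7f9bd05-2796-4fb5-a462-6cd3d7cd921e")
tokenizer = AutoTokenizer.from_pretrained("fxmarty/tiny-dummy-qwen2")
```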
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.15.0
- Transformers 4.50.3
- Pytorch 2.5.1+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1 |
LeiChenChen/qwen1.5b-distill-chinese-deepseek-110k | LeiChenChen | 2025-04-07T02:48:48Z | 2 | 0 | null | [
"safetensors",
"qwen2",
"region:us"
]
| null | 2025-03-30T01:33:21Z | Based on the given question, generate the answer in the following format: first, show the process of thinking through the problem (wrapped in <think> tags), then give the final answer (wrapped in <answer> tags). Make sure the reasoning is clear and proceeds step by step, and end with a complete answer.
\### Input:
Continue the Fibonacci sequence: 1, 1, 2, 3, 5, 8
\### Response:
\<think>Hmm, the user is asking me to continue this Fibonacci sequence with the next number. The original sequence is 1, 1, 2, 3, 5, 8. First I need to make sure I have understood the question correctly. It looks like they may want the missing part of the given sequence, or perhaps they are testing my memory? Usually, though, a task like this is fill-in-the-blank: a number is missing at some position and the whole sequence should be made coherent. So the right approach is to fill in the corresponding value at each position.
The current sequence is: 1, 1, 2, 3, 5, 8. How do we find the next number? I remember the rule of the Fibonacci sequence: each term is the sum of the two before it, i.e. aₙ = aₙ₋₁ + aₙ₋₂. So the known terms are:
- Term 0: 1
- Term 1: 1
- Term 2: 1
- Term 3: 2 = 1+1
- Term 4: 3 = 2+1
- Term 5: 5 = 3+2
- Term 6: 8 = 5+3?
In that case, the next number would be 8+5 = 13, right? Because continuing the recurrence, every step is the sum of the two preceding terms. So, filling them in order, it should be:
1, 1, 2, 3, 5, 8, 13, ... Continuing like this, term 7 should be 8+5 = 13, that is, 13. But there may be different ways of counting here — sometimes there are different starting points or different conventions. For example, some sources say the Fibonacci sequence can be viewed as starting from 0, meaning the first element is 0 rather than 1. If so, the 1 in the original sequence would count as the second element, and the earlier terms would be different.
At that point the user's definition needs to be made explicit. Suppose the sequence here starts from 0; then the original sequence becomes 0, 1, 1, 2, 3, 5, 8, ... as the corresponding Fibonacci sequence. The subsequent numbers then differ: the second step is 0 to 1, the third 1 to 1, the fourth 1 to 2, the fifth 2 to 3, the sixth 3 to 5, the seventh 5 to 8, the eighth 8 to 13, and so on. So in that case the correct sequence is 0, 1, 1, 2, 3, 5, 8, 13, ..., and when asked "what comes next", the right answer is 13.
But the sequence in the user's example begins with 1, 1, 2, 3, 5, 8, which clearly does not match the common standard definition, since it does not include 0 as the first term. So I think the user may have a mistaken understanding here, or is using a particular version of the sequence. In that case I should point this out and offer both possibilities.
Alternatively, the user may have something else in mind — say, subtracting 1 from every number before processing, or some other arrangement. For example, there could be a hidden pattern, such as certain values repeating after a fixed number of terms, but that seems unlikely.
In summary, the most reasonable interpretation is the standard Fibonacci sequence, with the starting term being 0 or 1. The correct answer therefore depends on the choice of initial conditions. If the user means the sequence starting from 0, the answer is 13; if they mean the conventional sequence starting from 1, the answer is also 13. To reply precisely, I would need to ask the user for the exact starting term; otherwise the correct answer cannot be pinned down.
\</think> \<answer>
The Fibonacci sequence conventionally **starts from 0 or 1**. With 0 as the first term, the sequence is:
0, 1, 1, 2, 3, 5, 8, ...
But following the conventional starting point of 1, the subsequent numbers are:
1, 1, 2, 3, 5, 8, ...
Your current sequence (1, 1, 2, 3, 5, 8) does not cover all the terms and needs to be completed. By the usual definition, the next number should be **13** (5+8=13).\</answer><|im_end|> |
XWangxh/Taxi-v3 | XWangxh | 2025-04-07T02:48:47Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2025-04-07T02:48:44Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gym

# `load_from_hub` is the helper function defined in the Hugging Face Deep RL course notebook
model = load_from_hub(repo_id="XWangxh/Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
|
TongZheng1999/gemma-2-9b-it-star-nl-OP-final_v2_10-2-3Rounds-iter-2 | TongZheng1999 | 2025-04-07T02:45:38Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gemma2",
"text-generation",
"generated_from_trainer",
"alignment-handbook",
"trl",
"sft",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-06T23:17:51Z | ---
library_name: transformers
model_name: gemma-2-9b-it-star-nl-OP-final_v2_10-2-3Rounds-iter-2
tags:
- generated_from_trainer
- alignment-handbook
- trl
- sft
licence: license
---
# Model Card for gemma-2-9b-it-star-nl-OP-final_v2_10-2-3Rounds-iter-2
This model is a fine-tuned version of a base model that was not recorded in the training metadata.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="TongZheng1999/gemma-2-9b-it-star-nl-OP-final_v2_10-2-3Rounds-iter-2", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/kidzheng/huggingface/runs/0rs06ltc)
This model was trained with SFT.
### Framework versions
- TRL: 0.12.0
- Transformers: 4.46.0
- Pytorch: 2.6.0
- Datasets: 3.3.1
- Tokenizers: 0.20.3
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
simplyshanna/justshanna2 | simplyshanna | 2025-04-07T02:45:21Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
]
| text-to-image | 2025-04-07T02:17:40Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: justshanna2
---
# Justshanna2
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `justshanna2` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "justshanna2",
"lora_weights": "https://huggingface.co/simplyshanna/justshanna2/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('simplyshanna/justshanna2', weight_name='lora.safetensors')
image = pipeline('justshanna2').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/simplyshanna/justshanna2/discussions) to add images that show off what you’ve made with this LoRA.
|
mradermacher/soil-gemini-GGUF | mradermacher | 2025-04-07T02:45:20Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:denxxs/soil-gemini",
"base_model:quantized:denxxs/soil-gemini",
"license:wtfpl",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-04-07T02:20:38Z | ---
base_model: denxxs/soil-gemini
language:
- en
library_name: transformers
license: wtfpl
license_link: LICENSE
license_name: dennis
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/denxxs/soil-gemini
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/soil-gemini-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
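A hedged Python alternative via the `llama-cpp-python` bindings (the quant file name comes from the table below; the prompt is illustrative):
```python
# Hedged sketch: run a quant from this repo with llama-cpp-python.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/soil-gemini-GGUF",
    filename="soil-gemini.Q4_K_M.gguf",
)
out = llm("Question: What does soil pH measure?\nAnswer:", max_tokens=64)
print(out["choices"][0]["text"])
```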
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/soil-gemini-GGUF/resolve/main/soil-gemini.Q2_K.gguf) | Q2_K | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/soil-gemini-GGUF/resolve/main/soil-gemini.Q3_K_S.gguf) | Q3_K_S | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/soil-gemini-GGUF/resolve/main/soil-gemini.Q3_K_M.gguf) | Q3_K_M | 1.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/soil-gemini-GGUF/resolve/main/soil-gemini.Q3_K_L.gguf) | Q3_K_L | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/soil-gemini-GGUF/resolve/main/soil-gemini.IQ4_XS.gguf) | IQ4_XS | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/soil-gemini-GGUF/resolve/main/soil-gemini.Q4_K_S.gguf) | Q4_K_S | 1.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/soil-gemini-GGUF/resolve/main/soil-gemini.Q4_K_M.gguf) | Q4_K_M | 1.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/soil-gemini-GGUF/resolve/main/soil-gemini.Q5_K_S.gguf) | Q5_K_S | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/soil-gemini-GGUF/resolve/main/soil-gemini.Q5_K_M.gguf) | Q5_K_M | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/soil-gemini-GGUF/resolve/main/soil-gemini.Q6_K.gguf) | Q6_K | 2.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/soil-gemini-GGUF/resolve/main/soil-gemini.Q8_0.gguf) | Q8_0 | 2.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/soil-gemini-GGUF/resolve/main/soil-gemini.f16.gguf) | f16 | 5.1 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
gogobo/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-jumping_pudgy_peacock | gogobo | 2025-04-07T02:42:18Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am jumping pudgy peacock",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-04T11:00:06Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-jumping_pudgy_peacock
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am jumping pudgy peacock
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-jumping_pudgy_peacock
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="gogobo/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-jumping_pudgy_peacock", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.50.3
- Pytorch: 2.5.1
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
memevis/WL70 | memevis | 2025-04-07T02:41:59Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-07T02:39:20Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
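As a stopgap, a hedged sketch consistent with the repository metadata (Llama-architecture `text-generation` checkpoint) might be:
```python
# Hedged sketch: the repo metadata marks this as a Llama-architecture causal LM.
from transformers import pipeline

pipe = pipeline("text-generation", model="memevis/WL70")
print(pipe("The quick brown fox", max_new_tokens=40)[0]["generated_text"])
```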
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Praful69/Merge2 | Praful69 | 2025-04-07T02:41:45Z | 455 | 0 | null | [
"safetensors",
"llama",
"en",
"license:apache-2.0",
"region:us"
]
| null | 2025-03-09T05:17:18Z | ---
license: apache-2.0
language:
- en
metrics:
- accuracy
--- |
mradermacher/Mixtral-v0.1-8x7B-GGUF | mradermacher | 2025-04-07T02:41:35Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-06T21:22:44Z | ---
base_model: LlamaFinetuneBase/Mixtral-v0.1-8x7B
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/LlamaFinetuneBase/Mixtral-v0.1-8x7B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
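If you prefer Python over the CLI, a hedged sketch with the `llama-cpp-python` bindings (file name from the table below; note the Q4_K_M file is ~28.5 GB):
```python
# Hedged sketch: load a Mixtral quant from this repo with llama-cpp-python.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/Mixtral-v0.1-8x7B-GGUF",
    filename="Mixtral-v0.1-8x7B.Q4_K_M.gguf",
    n_gpu_layers=-1,  # offload all layers to GPU if available
)
print(llm("The capital of France is", max_tokens=16)["choices"][0]["text"])
```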
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Mixtral-v0.1-8x7B-GGUF/resolve/main/Mixtral-v0.1-8x7B.Q2_K.gguf) | Q2_K | 17.4 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral-v0.1-8x7B-GGUF/resolve/main/Mixtral-v0.1-8x7B.Q3_K_S.gguf) | Q3_K_S | 20.5 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral-v0.1-8x7B-GGUF/resolve/main/Mixtral-v0.1-8x7B.Q3_K_M.gguf) | Q3_K_M | 22.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Mixtral-v0.1-8x7B-GGUF/resolve/main/Mixtral-v0.1-8x7B.Q3_K_L.gguf) | Q3_K_L | 24.3 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral-v0.1-8x7B-GGUF/resolve/main/Mixtral-v0.1-8x7B.IQ4_XS.gguf) | IQ4_XS | 25.5 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral-v0.1-8x7B-GGUF/resolve/main/Mixtral-v0.1-8x7B.Q4_K_S.gguf) | Q4_K_S | 26.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mixtral-v0.1-8x7B-GGUF/resolve/main/Mixtral-v0.1-8x7B.Q4_K_M.gguf) | Q4_K_M | 28.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mixtral-v0.1-8x7B-GGUF/resolve/main/Mixtral-v0.1-8x7B.Q5_K_S.gguf) | Q5_K_S | 32.3 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral-v0.1-8x7B-GGUF/resolve/main/Mixtral-v0.1-8x7B.Q5_K_M.gguf) | Q5_K_M | 33.3 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral-v0.1-8x7B-GGUF/resolve/main/Mixtral-v0.1-8x7B.Q6_K.gguf) | Q6_K | 38.5 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Mixtral-v0.1-8x7B-GGUF/resolve/main/Mixtral-v0.1-8x7B.Q8_0.gguf) | Q8_0 | 49.7 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
lesso17/0ddb719d-7555-4184-9dfa-8d539a586dd5 | lesso17 | 2025-04-07T02:41:20Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM2-360M",
"base_model:adapter:unsloth/SmolLM2-360M",
"license:apache-2.0",
"region:us"
]
| null | 2025-04-07T02:06:33Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM2-360M
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 0ddb719d-7555-4184-9dfa-8d539a586dd5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM2-360M
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 338b5ba28c840830_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/338b5ba28c840830_train_data.json
type:
field_input: privacy_mask
field_instruction: masked_text
field_output: unmasked_text
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
do_eval: true
early_stopping_patience: 3
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 500
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 8
gradient_checkpointing: true
group_by_length: true
hub_model_id: lesso17/0ddb719d-7555-4184-9dfa-8d539a586dd5
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.000217
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 50
lora_alpha: 128
lora_dropout: 0.15
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: 500
micro_batch_size: 4
mlflow_experiment_name: /tmp/338b5ba28c840830_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 10
optimizer: adamw_torch_fused
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 500
saves_per_epoch: null
seed: 170
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: bef76c94-f307-4eb1-80a3-90616305ebf2
wandb_project: 17a
wandb_run: your_name
wandb_runid: bef76c94-f307-4eb1-80a3-90616305ebf2
warmup_steps: 100
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 0ddb719d-7555-4184-9dfa-8d539a586dd5
This model is a fine-tuned version of [unsloth/SmolLM2-360M](https://huggingface.co/unsloth/SmolLM2-360M) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0023
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000217
- train_batch_size: 4
- eval_batch_size: 4
- seed: 170
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0002 | 1 | 0.5994 |
| 0.0023 | 0.0809 | 500 | 0.0023 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
MinaMila/gemma2_9b_Adult_7ep_22 | MinaMila | 2025-04-07T02:39:53Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/gemma-2-9b",
"base_model:finetune:unsloth/gemma-2-9b",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-07T02:36:26Z | ---
base_model: unsloth/gemma-2-9b
tags:
- text-generation-inference
- transformers
- unsloth
- gemma2
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** MinaMila
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-2-9b
This gemma2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
YoungjaeNa/na-ko-6b-text2sql | YoungjaeNa | 2025-04-07T02:37:03Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2025-04-07T02:29:17Z | ---
license: apache-2.0
---
|
genki10/Trial3BERT_AugV8_k7_task1_organization_sp030_lw010_fold4 | genki10 | 2025-04-07T00:38:24Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2025-04-07T00:22:57Z | ---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: Trial3BERT_AugV8_k7_task1_organization_sp030_lw010_fold4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Trial3BERT_AugV8_k7_task1_organization_sp030_lw010_fold4
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8567
- Qwk: 0.4601
- Mse: 0.8567
- Rmse: 0.9256
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 150
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| No log | 1.0 | 5 | 7.7230 | 0.0018 | 7.7230 | 2.7790 |
| No log | 2.0 | 10 | 3.7022 | 0.0040 | 3.7022 | 1.9241 |
| No log | 3.0 | 15 | 1.8738 | 0.0509 | 1.8738 | 1.3689 |
| No log | 4.0 | 20 | 1.1164 | 0.0107 | 1.1164 | 1.0566 |
| No log | 5.0 | 25 | 1.1792 | 0.0107 | 1.1792 | 1.0859 |
| No log | 6.0 | 30 | 0.8684 | 0.1514 | 0.8684 | 0.9319 |
| No log | 7.0 | 35 | 1.5485 | 0.1531 | 1.5485 | 1.2444 |
| No log | 8.0 | 40 | 0.7063 | 0.4789 | 0.7063 | 0.8404 |
| No log | 9.0 | 45 | 1.5970 | 0.1683 | 1.5970 | 1.2637 |
| No log | 10.0 | 50 | 0.6996 | 0.3603 | 0.6996 | 0.8364 |
| No log | 11.0 | 55 | 0.7871 | 0.4002 | 0.7871 | 0.8872 |
| No log | 12.0 | 60 | 0.5792 | 0.4541 | 0.5792 | 0.7611 |
| No log | 13.0 | 65 | 0.6675 | 0.5302 | 0.6675 | 0.8170 |
| No log | 14.0 | 70 | 0.5936 | 0.5025 | 0.5936 | 0.7705 |
| No log | 15.0 | 75 | 0.6064 | 0.5651 | 0.6064 | 0.7787 |
| No log | 16.0 | 80 | 0.7287 | 0.5474 | 0.7287 | 0.8537 |
| No log | 17.0 | 85 | 1.2337 | 0.3222 | 1.2337 | 1.1107 |
| No log | 18.0 | 90 | 0.6381 | 0.5695 | 0.6381 | 0.7988 |
| No log | 19.0 | 95 | 0.6365 | 0.5894 | 0.6365 | 0.7978 |
| No log | 20.0 | 100 | 1.2623 | 0.3490 | 1.2623 | 1.1235 |
| No log | 21.0 | 105 | 1.0040 | 0.4510 | 1.0040 | 1.0020 |
| No log | 22.0 | 110 | 1.1975 | 0.3715 | 1.1975 | 1.0943 |
| No log | 23.0 | 115 | 1.1188 | 0.3913 | 1.1188 | 1.0578 |
| No log | 24.0 | 120 | 0.9801 | 0.4109 | 0.9801 | 0.9900 |
| No log | 25.0 | 125 | 0.7136 | 0.5109 | 0.7136 | 0.8448 |
| No log | 26.0 | 130 | 0.9630 | 0.4389 | 0.9630 | 0.9813 |
| No log | 27.0 | 135 | 1.0872 | 0.3892 | 1.0872 | 1.0427 |
| No log | 28.0 | 140 | 0.8670 | 0.4530 | 0.8670 | 0.9311 |
| No log | 29.0 | 145 | 0.7710 | 0.4951 | 0.7710 | 0.8781 |
| No log | 30.0 | 150 | 0.7506 | 0.5060 | 0.7506 | 0.8664 |
| No log | 31.0 | 155 | 0.8108 | 0.4555 | 0.8108 | 0.9005 |
| No log | 32.0 | 160 | 1.0215 | 0.3629 | 1.0215 | 1.0107 |
| No log | 33.0 | 165 | 0.7301 | 0.5286 | 0.7301 | 0.8544 |
| No log | 34.0 | 170 | 0.8567 | 0.4601 | 0.8567 | 0.9256 |
### Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 3.3.1
- Tokenizers 0.21.0
|
taha454/poca-SoccerTwos | taha454 | 2025-04-07T00:35:13Z | 0 | 0 | ml-agents | [
"ml-agents",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
]
| reinforcement-learning | 2025-04-07T00:35:12Z | ---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: taha454/poca-SoccerTwos
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
Brianpuz/Qwen2.5-0.5B-Instruct-Q4_K_M-GGUF | Brianpuz | 2025-04-07T00:34:31Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"chat",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:Qwen/Qwen2.5-0.5B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-0.5B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
]
| text-generation | 2025-04-07T00:34:26Z | ---
base_model: Qwen/Qwen2.5-0.5B-Instruct
language:
- en
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- chat
- llama-cpp
- gguf-my-repo
---
# Brianpuz/Qwen2.5-0.5B-Instruct-Q4_K_M-GGUF
Absolutely tremendous! This repo features **GGUF quantized** versions of [Qwen/Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct) — made possible using the *very powerful* `llama.cpp`. Believe me, it's fast, it's smart, it's winning.
## Quantized Versions:
Only the best quantization. You’ll love it.
## Run with llama.cpp
Just plug it in, hit the command line, and boom — you're running world-class AI, folks:
```bash
llama-cli --hf-repo Brianpuz/Qwen2.5-0.5B-Instruct-Q4_K_M-GGUF --hf-file qwen2.5-0.5b-instruct-q4_k_m.gguf -p "AI First, but also..."
```
This beautiful Hugging Face Space was brought to you by the **amazing team at [Antigma Labs](https://antigma.ai)**. Great people. Big vision. Doing things that matter — and doing them right.
Total winners.
|
gaunernst/gemma-3-12b-it-qat-autoawq | gaunernst | 2025-04-07T00:33:13Z | 0 | 0 | null | [
"safetensors",
"gemma3",
"gemma",
"image-text-to-text",
"conversational",
"arxiv:1905.07830",
"arxiv:1905.10044",
"arxiv:1911.11641",
"arxiv:1904.09728",
"arxiv:1705.03551",
"arxiv:1911.01547",
"arxiv:1907.10641",
"arxiv:1903.00161",
"arxiv:2009.03300",
"arxiv:2304.06364",
"arxiv:2103.03874",
"arxiv:2110.14168",
"arxiv:2311.12022",
"arxiv:2108.07732",
"arxiv:2107.03374",
"arxiv:2210.03057",
"arxiv:2106.03193",
"arxiv:1910.11856",
"arxiv:2502.12404",
"arxiv:2502.21228",
"arxiv:2404.16816",
"arxiv:2104.12756",
"arxiv:2311.16502",
"arxiv:2203.10244",
"arxiv:2404.12390",
"arxiv:1810.12440",
"arxiv:1908.02660",
"arxiv:2312.11805",
"base_model:google/gemma-3-12b-it",
"base_model:quantized:google/gemma-3-12b-it",
"license:gemma",
"4-bit",
"awq",
"region:us"
]
| image-text-to-text | 2025-04-06T16:28:57Z | ---
license: gemma
pipeline_tag: image-text-to-text
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: >-
To access Gemma on Hugging Face, you’re required to review and agree to
Google’s usage license. To do this, please ensure you’re logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
base_model: google/gemma-3-12b-it
tags:
- gemma
- gemma3
---
# Gemma 3 12B Instruction-tuned QAT AutoAWQ
This checkpoint was converted from https://huggingface.co/google/gemma-3-12b-it-qat-q4_0-gguf to AutoAWQ format and BF16 dtype (hence, not lossless). The vision tower was transplanted from https://huggingface.co/google/gemma-3-12b-it.
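For reference, a hedged loading sketch for this AWQ checkpoint (not an official snippet; it assumes a recent transformers release with Gemma 3 support and `autoawq` installed):
```python
# Hedged sketch: load the AWQ-quantized Gemma 3 checkpoint with transformers.
from transformers import AutoProcessor, Gemma3ForConditionalGeneration

model_id = "gaunernst/gemma-3-12b-it-qat-autoawq"
processor = AutoProcessor.from_pretrained(model_id)
model = Gemma3ForConditionalGeneration.from_pretrained(model_id, device_map="auto")
```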
Below is the original model card.
# Gemma 3 model card
**Model Page**: [Gemma](https://ai.google.dev/gemma/docs/core)
> [!Note]
> This repository corresponds to the 12B **instruction-tuned** version of the Gemma 3 model in GGUF format using Quantization Aware Training (QAT).
> The GGUF corresponds to Q4_0 quantization.
>
> Thanks to QAT, the model is able to preserve similar quality as `bfloat16` while significantly reducing the memory requirements
> to load the model.
>
> You can find the half-precision version [here](https://huggingface.co/google/gemma-3-12b-it).
**Resources and Technical Documentation**:
* [Gemma 3 Technical Report][g3-tech-report]
* [Responsible Generative AI Toolkit][rai-toolkit]
* [Gemma on Kaggle][kaggle-gemma]
* [Gemma on Vertex Model Garden][vertex-mg-gemma3]
**Terms of Use**: [Terms][terms]
**Authors**: Google DeepMind
## Model Information
Summary description and brief definition of inputs and outputs.
### Description
Gemma is a family of lightweight, state-of-the-art open models from Google,
built from the same research and technology used to create the Gemini models.
Gemma 3 models are multimodal, handling text and image input and generating text
output, with open weights for both pre-trained variants and instruction-tuned
variants. Gemma 3 has a large, 128K context window, multilingual support in over
140 languages, and is available in more sizes than previous versions. Gemma 3
models are well-suited for a variety of text generation and image understanding
tasks, including question answering, summarization, and reasoning. Their
relatively small size makes it possible to deploy them in environments with
limited resources such as laptops, desktops or your own cloud infrastructure,
democratizing access to state of the art AI models and helping foster innovation
for everyone.
### Inputs and outputs
- **Input:**
- Text string, such as a question, a prompt, or a document to be summarized
- Images, normalized to 896 x 896 resolution and encoded to 256 tokens
each
- Total input context of 128K tokens for the 4B, 12B, and 27B sizes, and
32K tokens for the 1B size
- **Output:**
- Generated text in response to the input, such as an answer to a
question, analysis of image content, or a summary of a document
- Total output context of 8192 tokens
### Usage
Below, there are some code snippets on how to get quickly started with running the model.
**llama.cpp (text-only)**
```sh
./llama-cli -hf google/gemma-3-12b-it-qat-q4_0-gguf -p "Write a poem about the Kraken."
```
**llama.cpp (image input)**
```sh
wget https://github.com/bebechien/gemma/blob/main/surprise.png?raw=true -O ~/Downloads/surprise.png
./llama-gemma3-cli -hf google/gemma-3-12b-it-qat-q4_0-gguf -p "Describe this image." --image ~/Downloads/surprise.png
```
**ollama (text only)**
Using GGUFs with Ollama via Hugging Face does not support image inputs at the moment. Please check the [docs on running gated repositories](https://huggingface.co/docs/hub/en/ollama#run-private-ggufs-from-the-hugging-face-hub).
```sh
ollama run hf.co/google/gemma-3-12b-it-qat-q4_0-gguf
```
### Citation
```none
@article{gemma_2025,
title={Gemma 3},
url={https://goo.gle/Gemma3Report},
publisher={Kaggle},
author={Gemma Team},
year={2025}
}
```
## Model Data
Data used for model training and how the data was processed.
### Training Dataset
These models were trained on a dataset of text data that includes a wide variety
of sources. The 27B model was trained with 14 trillion tokens, the 12B model
with 12 trillion tokens, the 4B model with 4 trillion tokens, and the 1B model
with 2 trillion tokens. Here are the key components:
- Web Documents: A diverse collection of web text ensures the model is
exposed to a broad range of linguistic styles, topics, and vocabulary. The
training dataset includes content in over 140 languages.
- Code: Exposing the model to code helps it to learn the syntax and
patterns of programming languages, which improves its ability to generate
code and understand code-related questions.
- Mathematics: Training on mathematical text helps the model learn logical
reasoning, symbolic representation, and to address mathematical queries.
- Images: A wide range of images enables the model to perform image
analysis and visual data extraction tasks.
The combination of these diverse data sources is crucial for training a powerful
multimodal model that can handle a wide variety of different tasks and data
formats.
### Data Preprocessing
Here are the key data cleaning and filtering methods applied to the training
data:
- CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering
was applied at multiple stages in the data preparation process to ensure
the exclusion of harmful and illegal content.
- Sensitive Data Filtering: As part of making Gemma pre-trained models
safe and reliable, automated techniques were used to filter out certain
personal information and other sensitive data from training sets.
- Additional methods: Filtering based on content quality and safety in
line with [our policies][safety-policies].
## Implementation Information
Details about the model internals.
### Hardware
Gemma was trained using [Tensor Processing Unit (TPU)][tpu] hardware (TPUv4p,
TPUv5p and TPUv5e). Training vision-language models (VLMs) requires significant
computational power. TPUs, designed specifically for matrix operations common in
machine learning, offer several advantages in this domain:
- Performance: TPUs are specifically designed to handle the massive
computations involved in training VLMs. They can speed up training
considerably compared to CPUs.
- Memory: TPUs often come with large amounts of high-bandwidth memory,
allowing for the handling of large models and batch sizes during training.
This can lead to better model quality.
- Scalability: TPU Pods (large clusters of TPUs) provide a scalable
solution for handling the growing complexity of large foundation models.
You can distribute training across multiple TPU devices for faster and more
efficient processing.
- Cost-effectiveness: In many scenarios, TPUs can provide a more
cost-effective solution for training large models compared to CPU-based
infrastructure, especially when considering the time and resources saved
due to faster training.
These advantages are aligned with
[Google's commitments to operate sustainably][sustainability].
### Software
Training was done using [JAX][jax] and [ML Pathways][ml-pathways].
JAX allows researchers to take advantage of the latest generation of hardware,
including TPUs, for faster and more efficient training of large models. ML
Pathways is Google's latest effort to build artificially intelligent systems
capable of generalizing across multiple tasks. This is especially suitable for
foundation models, including large language models like these.
Together, JAX and ML Pathways are used as described in the
[paper about the Gemini family of models][gemini-2-paper]; *"the 'single
controller' programming model of Jax and Pathways allows a single Python
process to orchestrate the entire training run, dramatically simplifying the
development workflow."*
## Evaluation
Model evaluation metrics and results.
### Benchmark Results
These models were evaluated against a large collection of different datasets and
metrics to cover different aspects of text generation:
#### Reasoning and factuality
| Benchmark | Metric | Gemma 3 PT 1B | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B |
| ------------------------------ |----------------|:--------------:|:-------------:|:--------------:|:--------------:|
| [HellaSwag][hellaswag] | 10-shot | 62.3 | 77.2 | 84.2 | 85.6 |
| [BoolQ][boolq] | 0-shot | 63.2 | 72.3 | 78.8 | 82.4 |
| [PIQA][piqa] | 0-shot | 73.8 | 79.6 | 81.8 | 83.3 |
| [SocialIQA][socialiqa] | 0-shot | 48.9 | 51.9 | 53.4 | 54.9 |
| [TriviaQA][triviaqa] | 5-shot | 39.8 | 65.8 | 78.2 | 85.5 |
| [Natural Questions][naturalq] | 5-shot | 9.48 | 20.0 | 31.4 | 36.1 |
| [ARC-c][arc] | 25-shot | 38.4 | 56.2 | 68.9 | 70.6 |
| [ARC-e][arc] | 0-shot | 73.0 | 82.4 | 88.3 | 89.0 |
| [WinoGrande][winogrande] | 5-shot | 58.2 | 64.7 | 74.3 | 78.8 |
| [BIG-Bench Hard][bbh] | few-shot | 28.4 | 50.9 | 72.6 | 77.7 |
| [DROP][drop] | 1-shot | 42.4 | 60.1 | 72.2 | 77.2 |
[hellaswag]: https://arxiv.org/abs/1905.07830
[boolq]: https://arxiv.org/abs/1905.10044
[piqa]: https://arxiv.org/abs/1911.11641
[socialiqa]: https://arxiv.org/abs/1904.09728
[triviaqa]: https://arxiv.org/abs/1705.03551
[naturalq]: https://github.com/google-research-datasets/natural-questions
[arc]: https://arxiv.org/abs/1911.01547
[winogrande]: https://arxiv.org/abs/1907.10641
[bbh]: https://paperswithcode.com/dataset/bbh
[drop]: https://arxiv.org/abs/1903.00161
#### STEM and code
| Benchmark | Metric | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B |
| ------------------------------ |----------------|:-------------:|:--------------:|:--------------:|
| [MMLU][mmlu] | 5-shot | 59.6 | 74.5 | 78.6 |
| [MMLU][mmlu] (Pro COT) | 5-shot | 29.2 | 45.3 | 52.2 |
| [AGIEval][agieval] | 3-5-shot | 42.1 | 57.4 | 66.2 |
| [MATH][math] | 4-shot | 24.2 | 43.3 | 50.0 |
| [GSM8K][gsm8k] | 8-shot | 38.4 | 71.0 | 82.6 |
| [GPQA][gpqa] | 5-shot | 15.0 | 25.4 | 24.3 |
| [MBPP][mbpp] | 3-shot | 46.0 | 60.4 | 65.6 |
| [HumanEval][humaneval] | 0-shot | 36.0 | 45.7 | 48.8 |
[mmlu]: https://arxiv.org/abs/2009.03300
[agieval]: https://arxiv.org/abs/2304.06364
[math]: https://arxiv.org/abs/2103.03874
[gsm8k]: https://arxiv.org/abs/2110.14168
[gpqa]: https://arxiv.org/abs/2311.12022
[mbpp]: https://arxiv.org/abs/2108.07732
[humaneval]: https://arxiv.org/abs/2107.03374
#### Multilingual
| Benchmark | Gemma 3 PT 1B | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B |
| ------------------------------------ |:-------------:|:-------------:|:--------------:|:--------------:|
| [MGSM][mgsm] | 2.04 | 34.7 | 64.3 | 74.3 |
| [Global-MMLU-Lite][global-mmlu-lite] | 24.9 | 57.0 | 69.4 | 75.7 |
| [WMT24++][wmt24pp] (ChrF) | 36.7 | 48.4 | 53.9 | 55.7 |
| [FloRes][flores] | 29.5 | 39.2 | 46.0 | 48.8 |
| [XQuAD][xquad] (all) | 43.9 | 68.0 | 74.5 | 76.8 |
| [ECLeKTic][eclektic] | 4.69 | 11.0 | 17.2 | 24.4 |
| [IndicGenBench][indicgenbench] | 41.4 | 57.2 | 61.7 | 63.4 |
[mgsm]: https://arxiv.org/abs/2210.03057
[flores]: https://arxiv.org/abs/2106.03193
[xquad]: https://arxiv.org/abs/1910.11856v3
[global-mmlu-lite]: https://huggingface.co/datasets/CohereForAI/Global-MMLU-Lite
[wmt24pp]: https://arxiv.org/abs/2502.12404v1
[eclektic]: https://arxiv.org/abs/2502.21228
[indicgenbench]: https://arxiv.org/abs/2404.16816
#### Multimodal
| Benchmark | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B |
| ------------------------------ |:-------------:|:--------------:|:--------------:|
| [COCOcap][coco-cap] | 102 | 111 | 116 |
| [DocVQA][docvqa] (val) | 72.8 | 82.3 | 85.6 |
| [InfoVQA][info-vqa] (val) | 44.1 | 54.8 | 59.4 |
| [MMMU][mmmu] (pt) | 39.2 | 50.3 | 56.1 |
| [TextVQA][textvqa] (val) | 58.9 | 66.5 | 68.6 |
| [RealWorldQA][realworldqa] | 45.5 | 52.2 | 53.9 |
| [ReMI][remi] | 27.3 | 38.5 | 44.8 |
| [AI2D][ai2d] | 63.2 | 75.2 | 79.0 |
| [ChartQA][chartqa] | 63.6 | 74.7 | 76.3 |
| [VQAv2][vqav2] | 63.9 | 71.2 | 72.9 |
| [BLINK][blinkvqa] | 38.0 | 35.9 | 39.6 |
| [OKVQA][okvqa] | 51.0 | 58.7 | 60.2 |
| [TallyQA][tallyqa] | 42.5 | 51.8 | 54.3 |
| [SpatialSense VQA][ss-vqa] | 50.9 | 60.0 | 59.4 |
| [CountBenchQA][countbenchqa] | 26.1 | 17.8 | 68.0 |
[coco-cap]: https://cocodataset.org/#home
[docvqa]: https://www.docvqa.org/
[info-vqa]: https://arxiv.org/abs/2104.12756
[mmmu]: https://arxiv.org/abs/2311.16502
[textvqa]: https://textvqa.org/
[realworldqa]: https://paperswithcode.com/dataset/realworldqa
[remi]: https://arxiv.org/html/2406.09175v1
[ai2d]: https://allenai.org/data/diagrams
[chartqa]: https://arxiv.org/abs/2203.10244
[vqav2]: https://visualqa.org/index.html
[blinkvqa]: https://arxiv.org/abs/2404.12390
[okvqa]: https://okvqa.allenai.org/
[tallyqa]: https://arxiv.org/abs/1810.12440
[ss-vqa]: https://arxiv.org/abs/1908.02660
[countbenchqa]: https://github.com/google-research/big_vision/blob/main/big_vision/datasets/countbenchqa/
## Ethics and Safety
Ethics and safety evaluation approach and results.
### Evaluation Approach
Our evaluation methods include structured evaluations and internal red-teaming
testing of relevant content policies. Red-teaming was conducted by a number of
different teams, each with different goals and human evaluation metrics. These
models were evaluated against a number of different categories relevant to
ethics and safety, including:
- **Child Safety**: Evaluation of text-to-text and image to text prompts
covering child safety policies, including child sexual abuse and
exploitation.
- **Content Safety:** Evaluation of text-to-text and image to text prompts
covering safety policies including, harassment, violence and gore, and hate
speech.
- **Representational Harms**: Evaluation of text-to-text and image to text
prompts covering safety policies including bias, stereotyping, and harmful
associations or inaccuracies.
In addition to development level evaluations, we conduct "assurance
evaluations" which are our 'arms-length' internal evaluations for responsibility
governance decision making. They are conducted separately from the model
development team, to inform decision making about release. High-level findings
are fed back to the model team, but prompt sets are held out to prevent
overfitting and preserve the results' ability to inform decision making.
Assurance evaluation results are reported to our Responsibility & Safety Council
as part of release review.
### Evaluation Results
For all areas of safety testing, we saw major improvements in the categories of
child safety, content safety, and representational harms relative to previous
Gemma models. All testing was conducted without safety filters to evaluate the
model capabilities and behaviors. For both text-to-text and image-to-text, and
across all model sizes, the model produced minimal policy violations, and showed
significant improvements over previous Gemma models' performance with respect
to ungrounded inferences. A limitation of our evaluations was that they included
only English-language prompts.
## Usage and Limitations
These models have certain limitations that users should be aware of.
### Intended Usage
Open vision-language models (VLMs) have a wide range of applications
across various industries and domains. The following list of potential uses is
not comprehensive. The purpose of this list is to provide contextual information
about the possible use-cases that the model creators considered as part of model
training and development.
- Content Creation and Communication
- Text Generation: These models can be used to generate creative text
formats such as poems, scripts, code, marketing copy, and email drafts.
- Chatbots and Conversational AI: Power conversational interfaces
for customer service, virtual assistants, or interactive applications.
- Text Summarization: Generate concise summaries of a text corpus,
research papers, or reports.
- Image Data Extraction: These models can be used to extract,
interpret, and summarize visual data for text communications.
- Research and Education
- Natural Language Processing (NLP) and VLM Research: These
models can serve as a foundation for researchers to experiment with VLM
and NLP techniques, develop algorithms, and contribute to the
advancement of the field.
- Language Learning Tools: Support interactive language learning
experiences, aiding in grammar correction or providing writing practice.
- Knowledge Exploration: Assist researchers in exploring large
bodies of text by generating summaries or answering questions about
specific topics.
### Limitations
- Training Data
- The quality and diversity of the training data significantly
influence the model's capabilities. Biases or gaps in the training data
can lead to limitations in the model's responses.
- The scope of the training dataset determines the subject areas
the model can handle effectively.
- Context and Task Complexity
- Models are better at tasks that can be framed with clear
prompts and instructions. Open-ended or highly complex tasks might be
challenging.
- A model's performance can be influenced by the amount of context
provided (longer context generally leads to better outputs, up to a
certain point).
- Language Ambiguity and Nuance
- Natural language is inherently complex. Models might struggle
to grasp subtle nuances, sarcasm, or figurative language.
- Factual Accuracy
- Models generate responses based on information they learned
from their training datasets, but they are not knowledge bases. They
may generate incorrect or outdated factual statements.
- Common Sense
- Models rely on statistical patterns in language. They might
lack the ability to apply common sense reasoning in certain situations.
### Ethical Considerations and Risks
The development of vision-language models (VLMs) raises several ethical
concerns. In creating an open model, we have carefully considered the following:
- Bias and Fairness
- VLMs trained on large-scale, real-world text and image data can
reflect socio-cultural biases embedded in the training material. These
models underwent careful scrutiny; their input data pre-processing is described
and posterior evaluations are reported in this card.
- Misinformation and Misuse
- VLMs can be misused to generate text that is false, misleading,
or harmful.
- Guidelines are provided for responsible use with the model, see the
[Responsible Generative AI Toolkit][rai-toolkit].
- Transparency and Accountability:
- This model card summarizes details on the models' architecture,
capabilities, limitations, and evaluation processes.
- A responsibly developed open model offers the opportunity to
share innovation by making VLM technology accessible to developers and
researchers across the AI ecosystem.
Risks identified and mitigations:
- **Perpetuation of biases**: It's encouraged to perform continuous
monitoring (using evaluation metrics, human review) and the exploration of
de-biasing techniques during model training, fine-tuning, and other use
cases.
- **Generation of harmful content**: Mechanisms and guidelines for content
safety are essential. Developers are encouraged to exercise caution and
implement appropriate content safety safeguards based on their specific
product policies and application use cases.
- **Misuse for malicious purposes**: Technical limitations and developer
and end-user education can help mitigate against malicious applications of
VLMs. Educational resources and reporting mechanisms for users to flag
misuse are provided. Prohibited uses of Gemma models are outlined in the
[Gemma Prohibited Use Policy][prohibited-use].
- **Privacy violations**: Models were trained on data filtered for removal
of certain personal information and other sensitive data. Developers are
encouraged to adhere to privacy regulations with privacy-preserving
techniques.
### Benefits
At the time of release, this family of models provides high-performance open
vision-language model implementations designed from the ground up for
responsible AI development compared to similarly sized models.
Using the benchmark evaluation metrics described in this document, these models
have been shown to provide superior performance to other, comparably-sized open model
alternatives.
[g3-tech-report]: https://goo.gle/Gemma3Report
[rai-toolkit]: https://ai.google.dev/responsible
[kaggle-gemma]: https://www.kaggle.com/models/google/gemma-3
[vertex-mg-gemma3]: https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/gemma3
[terms]: https://ai.google.dev/gemma/terms
[safety-policies]: https://ai.google/static/documents/ai-responsibility-update-published-february-2025.pdf
[prohibited-use]: https://ai.google.dev/gemma/prohibited_use_policy
[tpu]: https://cloud.google.com/tpu/docs/intro-to-tpu
[sustainability]: https://sustainability.google/operating-sustainably/
[jax]: https://github.com/jax-ml/jax
[ml-pathways]: https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/
[gemini-2-paper]: https://arxiv.org/abs/2312.11805 |
xmarata/llama_level_generator_v1 | xmarata | 2025-04-07T00:31:42Z | 0 | 0 | transformers | [
"transformers",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-07T00:31:38Z | ---
base_model: unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** xmarata
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
hollywoodfrancis/simpleLLM | hollywoodfrancis | 2025-04-07T00:30:53Z | 0 | 0 | null | [
"finance",
"coding",
"text-generation",
"dataset:deepmind/code_contests",
"dataset:Muennighoff/natural-instructions",
"dataset:bigcode/the-stack-v2",
"dataset:Shuu12121/python-codesearch-dataset-open",
"dataset:Jobey1/Collection_Crypto_financial_trading_reasearch",
"dataset:benstaf/nasdaq_news_sentiment",
"base_model:meta-llama/Llama-4-Scout-17B-16E-Instruct",
"base_model:finetune:meta-llama/Llama-4-Scout-17B-16E-Instruct",
"license:other",
"region:us"
]
| text-generation | 2025-04-07T00:08:58Z | ---
license: other
license_name: hollywood-francis100
license_link: LICENSE
datasets:
- deepmind/code_contests
- Muennighoff/natural-instructions
- bigcode/the-stack-v2
- Shuu12121/python-codesearch-dataset-open
- Jobey1/Collection_Crypto_financial_trading_reasearch
- benstaf/nasdaq_news_sentiment
pipeline_tag: text-generation
tags:
- finance
- coding
base_model:
- meta-llama/Llama-4-Scout-17B-16E-Instruct
--- |
y0usly/MisfitCarti_250_Epochs | y0usly | 2025-04-07T00:30:45Z | 0 | 0 | null | [
"license:other",
"region:us"
]
| null | 2024-05-05T17:32:23Z | ---
license: other
license_name: carti
license_link: LICENSE
---
|
billybillys/phi4_sql_model | billybillys | 2025-04-07T00:30:35Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/phi-4-unsloth-bnb-4bit",
"base_model:finetune:unsloth/phi-4-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-07T00:30:12Z | ---
base_model: unsloth/phi-4-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** billybillys
- **License:** apache-2.0
- **Finetuned from model :** unsloth/phi-4-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
krishnathalapathy/robertafinalaug | krishnathalapathy | 2025-04-07T00:29:47Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2025-04-07T00:29:19Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
JEFFERSONMUSIC/MJMatureEraV240K | JEFFERSONMUSIC | 2025-04-07T00:29:27Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2025-04-07T00:27:10Z | ---
license: apache-2.0
---
|
jrolf/finetuned-tinyllama | jrolf | 2025-04-07T00:29:24Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-07T00:29:22Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
lilian5657/mi-modelo-checkpoint | lilian5657 | 2025-04-07T00:27:53Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:unsloth/mistral-7b-instruct-v0.3-bnb-4bit",
"base_model:adapter:unsloth/mistral-7b-instruct-v0.3-bnb-4bit",
"region:us"
]
| null | 2025-04-07T00:25:31Z | ---
base_model: unsloth/mistral-7b-instruct-v0.3-bnb-4bit
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.14.0 |
quwsarohi/SmolThink | quwsarohi | 2025-04-07T00:24:19Z | 68 | 0 | null | [
"safetensors",
"llama",
"R1",
"CoT",
"en",
"dataset:ServiceNow-AI/R1-Distill-SFT",
"dataset:Jofthomas/hermes-function-calling-thinking-V1",
"dataset:GeneralReasoning/GeneralThought-195K",
"dataset:open-r1/codeforces-cots",
"arxiv:2502.12143",
"base_model:HuggingFaceTB/SmolLM2-360M-Instruct",
"base_model:finetune:HuggingFaceTB/SmolLM2-360M-Instruct",
"license:apache-2.0",
"region:us"
]
| null | 2025-03-27T21:34:17Z | ---
license: apache-2.0
datasets:
- ServiceNow-AI/R1-Distill-SFT
- Jofthomas/hermes-function-calling-thinking-V1
- GeneralReasoning/GeneralThought-195K
- open-r1/codeforces-cots
language:
- en
base_model:
- HuggingFaceTB/SmolLM2-360M-Instruct
tags:
- R1
- CoT
---
# SmolThink: A Small Model That Tries to Think
**SmolThink** is a continued supervised fine-tuned version of [SmolLM2-360M](https://huggingface.co/HuggingFaceTB/SmolLM2-360M-Instruct) trained on a **DeepSeek-R1**-distilled dataset.
Training code and a portion of the dataset can be found at [QuwsarOhi/SmolThink](https://github.com/QuwsarOhi/SmolThink)
## Training Process
The model was trained on a mixture of short Chain-of-Thought (CoT) data and some long-CoT data. The mixture favors short CoT because the model is small, and it has been reported that small models struggle to produce long reasoning chains [ref](https://arxiv.org/abs/2502.12143).
The SFT dataset was created from the following data mixture:
* [ServiceNow-AI/R1-Distill-SFT](https://huggingface.co/datasets/ServiceNow-AI/R1-Distill-SFT)
* [Jofthomas/hermes-function-calling-thinking-V1](https://huggingface.co/datasets/Jofthomas/hermes-function-calling-thinking-V1)
* [GeneralReasoning/GeneralThought-195K](https://huggingface.co/datasets/GeneralReasoning/GeneralThought-195K)
* [open-r1/codeforces-cots](https://huggingface.co/datasets/open-r1/codeforces-cots)
* [XeTute/Open-Coding-Thoughts (currently unavailable)](https://huggingface.co/datasets/XeTute/Open-Coding-Thoughts)
* Custom tool-calling and web-search summarization datasets generated using **Phi-3.5**, **Qwen2**, and **deepseek-r1:7b**. The data generation pipeline is described [here](#)
The datasets were filtered by removing samples whose CoT was longer than 256 words. The model was also trained to produce tool calls. Being a small language model, it does not memorize certain facts (for example, how to bake a cake); when paired with web search, however, it can produce good-quality answers despite its size.
The model was supervised fine-tuned in two phases:
* The model was initially trained with a rolling context length of `832` with a token stride of `832/8`. Dataset snapshot is available as [merged_dataset_phase1](#).
* The model was again trained with a rolling context length of `3072` with a token stride of `768`. Dataset snapshot is available as [merged_dataset_phase2](#)
The model is still under training; the full training procedure and dataset mixtures will be published soon. The model was trained on a MacBook Air with 16 GB of unified memory.
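For illustration, the rolling-context preprocessing could look like the sketch below (an assumption for clarity; the authoritative preprocessing lives in the training repo linked above):

```python
# Sketch: split a long token sequence into overlapping training windows.
# Defaults match phase 2 above (context 3072, stride 768); the helper is illustrative.
def rolling_chunks(token_ids, context_len=3072, stride=768):
    chunks = []
    last_start = max(len(token_ids) - context_len, 0)
    for start in range(0, last_start + 1, stride):
        chunks.append(token_ids[start:start + context_len])
    return chunks
```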
## Usage
### General Usage
Use the following code to load and use the model:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
device = 'mps'
tokenizer = AutoTokenizer.from_pretrained(
"quwsarohi/SmolThink"
)
model = AutoModelForCausalLM.from_pretrained(
"quwsarohi/SmolThink",
low_cpu_mem_usage=True,
torch_dtype=torch.bfloat16,
trust_remote_code=True,
use_cache=False,
tie_word_embeddings=True,
).to(device)
messages = [{"role": "user", "content": "What is the capital of France?"}]
input_text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(input_text)
inputs = tokenizer.encode(input_text, return_tensors="pt").to(device)
outputs = model.generate(inputs, max_new_tokens=50, temperature=0.2, top_p=0.9, do_sample=True)
print(tokenizer.decode(outputs[0]))
```
### WebSearch Tool Integration
The model is further trained to perform web search through a dedicated websearch tool. The following code shows how to use the web-search capability.
```python
webtool_def = {
"type": "function",
"function": {
"name": "web_search",
"description": "Can search the web for infomation which are doubtful/unknown/recent.",
"parameters": {
"type": "object",
"properties": {
"search_str": {
"type": "string",
"description": "The whole question you want to ask.",
"required": True,
}
},
},
},
}
base_prompt = tokenizer.apply_chat_template([
{"role": "user", "content": "What is the current stock price of Apple?"}
], tools=[webtool_def], tokenize=False, add_generation_prompt=True)
print(base_prompt)
inputs = tokenizer.encode(base_prompt, return_tensors="pt").to(device)
outputs = model.generate(inputs, max_new_tokens=256, temperature=0.2, top_p=0.9, do_sample=True)
print(tokenizer.decode(outputs[0]))
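
# Sketch: extract the emitted tool call (the <tool_call> JSON markup is an
# assumption based on the Hermes-style tool-calling data used in training).
import re, json
generated = tokenizer.decode(outputs[0][inputs.shape[-1]:])
match = re.search(r"<tool_call>\s*(\{.*?\})\s*</tool_call>", generated, re.S)
if match:
    call = json.loads(match.group(1))
    print(call)  # e.g. {"name": "web_search", "arguments": {...}}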
``` |
zizi917/tinyllama-dpo-pairrm | zizi917 | 2025-04-07T00:24:03Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"region:us"
]
| null | 2025-04-07T00:23:57Z | ---
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.14.0 |
tscstudios/xtvxej0hszbmd6uyappaosu73ml1_c8d22c85-fd9e-4576-a1a2-d37dcb8379b4 | tscstudios | 2025-04-07T00:22:56Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
]
| text-to-image | 2025-04-07T00:22:54Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# Xtvxej0Hszbmd6Uyappaosu73Ml1_C8D22C85 Fd9E 4576 A1A2 D37Dcb8379B4
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "TOK",
"lora_weights": "https://huggingface.co/tscstudios/xtvxej0hszbmd6uyappaosu73ml1_c8d22c85-fd9e-4576-a1a2-d37dcb8379b4/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('tscstudios/xtvxej0hszbmd6uyappaosu73ml1_c8d22c85-fd9e-4576-a1a2-d37dcb8379b4', weight_name='lora.safetensors')
image = pipeline('TOK').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/tscstudios/xtvxej0hszbmd6uyappaosu73ml1_c8d22c85-fd9e-4576-a1a2-d37dcb8379b4/discussions) to add images that show off what you’ve made with this LoRA.
|
LaurelApollo/onepiece_style_LoRA | LaurelApollo | 2025-04-07T00:11:20Z | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
]
| text-to-image | 2025-04-07T00:11:10Z | ---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: openrail++
instance_prompt: art in DISCOELYSIUM style
widget: []
tags:
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - LaurelApollo/onepiece_style_LoRA
<Gallery />
## Model description
These are LaurelApollo/onepiece_style_LoRA LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use art in DISCOELYSIUM style to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](LaurelApollo/onepiece_style_LoRA/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
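Until an official snippet is added, here is a minimal sketch (assumptions: standard diffusers SDXL LoRA loading; the `weight_name` and prompt below are illustrative — check the Files & versions tab for the actual weights file name):

```python
# Sketch: load SDXL base and apply this LoRA.
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights(
    "LaurelApollo/onepiece_style_LoRA",
    weight_name="pytorch_lora_weights.safetensors",  # assumed default file name
)
image = pipe("a port town at dusk, art in DISCOELYSIUM style").images[0]
image.save("sample.png")
```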
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
LuxeQueer/content-classIfier | LuxeQueer | 2025-04-07T00:09:03Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2025-04-07T00:09:03Z | ---
license: apache-2.0
---
|
ColabUser/Chatgptchan | ColabUser | 2025-04-07T00:07:19Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
]
| text-to-image | 2025-04-07T00:06:06Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: images/chatgpt chan (11).png
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: Chatgptgirl, Chatgpt logo, eyeless
---
# ChatGpt chan
<Gallery />
## Trigger words
You should use `Chatgptgirl` to trigger the image generation.
You should use `Chatgpt logo` to trigger the image generation.
You should use `eyeless` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/ColabUser/Chatgptchan/tree/main) them in the Files & versions tab.
|
dgambettaphd/M_llm3_gen5_run0_W_doc1000_synt64_FTP | dgambettaphd | 2025-04-07T00:03:51Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-07T00:03:35Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
newchangertech/pavement7bv1 | newchangertech | 2025-04-07T00:02:00Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"conversational",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| image-text-to-text | 2025-03-31T21:03:57Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
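Pending an official snippet, a minimal sketch is given below. It assumes standard Qwen2.5-VL usage with a recent `transformers` release; the image file name and the prompt are illustrative.

```python
# Minimal sketch, assuming standard Qwen2.5-VL usage (transformers >= 4.49).
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration
from PIL import Image

model_id = "newchangertech/pavement7bv1"
processor = AutoProcessor.from_pretrained(model_id)
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(model_id, device_map="auto")

image = Image.open("pavement.jpg")  # hypothetical input image
messages = [{"role": "user", "content": [
    {"type": "image"},
    {"type": "text", "text": "Describe the pavement condition in this photo."},
]}]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=image, return_tensors="pt").to(model.device)
generated = model.generate(**inputs, max_new_tokens=128)
print(processor.decode(generated[0], skip_special_tokens=True))
```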
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
trashpanda-org/Qwen2.5-32B-Marigold-v0-exp | trashpanda-org | 2025-04-07T00:00:48Z | 68 | 1 | null | [
"safetensors",
"qwen2",
"base_model:Qwen/Qwen2.5-32B",
"base_model:finetune:Qwen/Qwen2.5-32B",
"license:apache-2.0",
"region:us"
]
| null | 2025-02-27T16:57:36Z | ---
license: apache-2.0
base_model:
- Qwen/Qwen2.5-32B
---

# Qwen2.5-32B-Marigold-v0-pre-release
<i>Second-runner-up to v0</i>
<p><b>Severian's notes</b>: Between three models, this came second. I personally preferred some of the responses I got while testing this model over actual v0, and some of our testers did the same. It's a little more unhinged than v0, so if that's what you're looking for, consider trying this out.</p>
## Recommended settings
<p><b>Context/instruct template</b>: ChatML for the base experience. Mistral V7 <i>shouldn't work</i> and results in some system token bleed, but the prose can be dramatic and expressive with it too; try at your own risk.</p>
<p><b>Samplers</b>: temperature at 0.9, min_p at 0.05, top_a at 0.3, TFS at 0.75, repetition_penalty at 1.03, DRY if you have access to it.</p>
A virt-io derivative prompt worked best during our testing, but feel free to use what you like.
## Thank you!
Big thanks to the folks in the trashpanda-org discord for testing and sending over some logs!
(datasets to be attributed later here)
## Reviews
> Feels rather tame compared to v0. However, I still enjoyed this model and its reasoning. Had 0 issues with latching onto the character personalization. Slight impersonation was heavy with this model, but it wasn't anything too terrible. I enjoyed it very much!
— Mooth
## A peek into Hasnonname's thoughts during testing



## Some logs










|
Delta-Vector/Hamanasu-4B-Chat-Brainrot | Delta-Vector | 2025-04-06T23:58:41Z | 15 | 1 | null | [
"safetensors",
"llama",
"qwen",
"roleplay",
"finetune",
"storywriting",
"dataset:NewEden/BlueSky-10K-Complexity",
"dataset:NewEden/Basket-Weaving-Filtered",
"dataset:PocketDoc/Dans-Kinomaxx-VanillaBackrooms",
"dataset:NewEden/Misc-Data-Sharegpt-Prefixed",
"dataset:PocketDoc/Dans-Personamaxx-VN",
"dataset:NewEden/LIMARP-Complexity",
"dataset:NewEden/Creative_Writing-Complexity",
"dataset:NewEden/OpenCAI-ShareGPT",
"dataset:NewEden/Storium-Prefixed-Clean",
"dataset:NewEden/DeepseekRP-Filtered",
"base_model:Delta-Vector/Hamanasu-4B-Instruct-KTO-V2",
"base_model:finetune:Delta-Vector/Hamanasu-4B-Instruct-KTO-V2",
"region:us"
]
| null | 2025-03-27T22:01:54Z | ---
thumbnail: "https://cdn-uploads.huggingface.co/production/uploads/66c26b6fb01b19d8c3c2467b/jg2NWmCUfPyzizm2USjMt.jpeg"
datasets:
- NewEden/BlueSky-10K-Complexity
- NewEden/Basket-Weaving-Filtered
- PocketDoc/Dans-Kinomaxx-VanillaBackrooms
- NewEden/Misc-Data-Sharegpt-Prefixed
- PocketDoc/Dans-Personamaxx-VN
- NewEden/LIMARP-Complexity
- NewEden/Creative_Writing-Complexity
- NewEden/OpenCAI-ShareGPT
- NewEden/Storium-Prefixed-Clean
- NewEden/DeepseekRP-Filtered
base_model:
- Delta-Vector/Hamanasu-4B-Instruct-KTO-V2
tags:
- qwen
- roleplay
- finetune
- storywriting
---
<!DOCTYPE html>
<style>
html, body {
background: black;
color: #c9d1d9 !important;
font-family: 'Segoe UI', Tahoma, Geneva, Verdana, sans-serif;
margin: 0;
padding: 0;
min-height: 100vh;
}
.markdown-body {
color: white;
margin: 40px auto;
padding: 40px;
border-radius: 12px;
position: relative;
overflow: hidden;
}
.markdown-body::after {
content: '';
position: absolute;
top: 0;
left: 0;
width: 100%;
height: 100%;
background: #0c0f18; /* background color */
pointer-events: none;
z-index: -999;
}
h1, h2, h3 {
background: linear-gradient(45deg, #6e00ff, #00ffff);
-webkit-background-clip: text;
-webkit-text-fill-color: transparent;
border-bottom: 1px solid #333;
padding-bottom: 0.3em;
}
div[style*="border:2px solid #333"],
div[style*="border: 2px solid #333"],
div[style*="border:1px solid #333"],
div[style*="border: 1px solid #333"] {
background: rgba(22, 27, 34, 0.8) !important;
border: 2px solid #6e00ff !important;
box-shadow: 0 0 15px rgba(110, 0, 255, 0.5);
border-radius: 10px;
padding: 20px;
margin: 20px 0;
}
code {
background-color: #1a1a1a !important;
border-radius: 4px;
padding: 0.2em 0.4em;
color: #00ffff;
}
pre {
background-color: #1a1a1a !important;
border: 1px solid #333;
border-radius: 8px;
padding: 16px;
}
table {
width: 100%;
border-collapse: collapse;
margin: 20px 0;
background: rgba(0,0,0,0.2);
table-layout: fixed;
color: white;
}
th, td {
border: 1px solid #333;
padding: 12px;
text-align: center;
color: white;
}
th {
background: rgba(110, 0, 255, 0.1);
}
td:nth-child(1) {
width: 1%;
white-space: nowrap;
}
td:nth-child(2) {
width: 100%;
}
td > span {
display: block;
padding: 4px 8px;
background: rgba(110, 0, 255, 0.1);
border-radius: 4px;
transition: all 0.3s ease;
}
td > span:hover {
background: rgba(110, 0, 255, 0.2);
transform: translateY(-1px);
}
a {
color: #00ffff;
text-decoration: none;
transition: all 0.3s ease;
}
a:hover {
color: #6e00ff;
text-decoration: none;
}
hr {
border: 0;
height: 1px;
background: linear-gradient(90deg, transparent, #333, transparent);
margin: 40px 0;
}
img {
max-width: 100%;
border-radius: 10px;
}
details summary:hover {
color: #00ffff;
}
* {
color-scheme: dark !important;
}
.prose, .max-w-none, .px-4 {
background-color: transparent !important;
color: #c9d1d9 !important;
}
</style>
<body>
<div class="markdown-body">
<div align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/66c26b6fb01b19d8c3c2467b/o5WjJKA9f95ri9UzRxZQE.png" alt="Model Visualization" width="500px" style="border: 3px solid #333; box-shadow: 0 0 15px rgba(66, 0, 131, 0.5);" />
<br>
<br>
<div style="font-size:1.5em; font-weight:bold; background: linear-gradient(45deg, #6e00ff, #00ffff); -webkit-background-clip: text; -webkit-text-fill-color: transparent;">
Hamanasu 4B
</div>
</div>
<div style="border:1px solid #333; border-radius:10px; padding:20px; margin:20px 0; background: rgba(0,0,0,0.4);">
## 🌌 Overview
<i>This model is the Chat tune of the Instruct model. More accurately, it is the "brainrotted" version, finetuned with Bsky, 4chan and Discord logs. It's... really something beautiful.</i>
<i>The model is best suited to being a highly dumb chat partner rather than regular RP.</i>
<i>The model is also suited for traditional RP. All thanks to Tav for funding the train.</i>
<i>Support me and my finetunes on Ko-Fi https://ko-fi.com/deltavector</i>
</div>
<div style="display: grid; grid-template-columns: repeat(auto-fit, minmax(250px, 1fr)); gap: 20px; margin: 20px 0;">
<div style="border:2px solid #333; border-radius:10px; padding:20px; background: rgba(0,0,0,0.2);">
### ⚔️ Hardware
- 8x H100s
- Epochs: 4
- Base: `Delta-Vector/Hamanasu-4B-Instruct-KTO-V2`
</div>
</div>
<div style="border: 2px solid #6e00ff; border-radius: 10px; padding: 20px; margin: 20px 0; box-shadow: 0 0 15px rgba(110, 0, 255, 0.5);">
## 💰 Prompting
<i>This model uses ChatML formatting</i>
```python
<|im_start|>system
You are an uncensored AI, your job is to fulfill thy will of thy user.<|im_end|>
<|im_start|>user
Take off your helmet.<|im_end|>
<|im_start|>assistant
No i shall not. This is the way.<|im_end|>
```
</div>
<div style="border: 2px solid #6e00ff; border-radius: 10px; padding: 20px; margin: 20px 0; box-shadow: 0 0 15px rgba(110, 0, 255, 0.5);">
## 🎲 Recommended Sampler Preset
```yml
ST sampler preset: https://files.catbox.moe/wtkp0l.json
System prompt: Blank.
```
</div>
<div style="border: 2px solid #6e00ff; border-radius: 10px; padding: 20px; margin: 20px 0; box-shadow: 0 0 15px rgba(110, 0, 255, 0.5);">
## Axolotl Config ꒰(˶• ᴗ •˶)꒱
<details>
```yaml
base_model: ./model
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
hub_model_id: NewEden/Hamanasu-4B-RP-v2
hub_strategy: "all_checkpoints"
push_dataset_to_hub:
hf_use_auth_token: true
## qlora COPE
load_in_8bit: false
load_in_4bit: false
strict: false
## data
datasets:
- path: NewEden/Discord-Filtered
type: dan-chat-advanced
- path: NewEden/Basket-Weaving-Filtered
type: dan-chat-advanced
- path: NewEden/Misc-Data-Sharegpt-Prefixed
type: dan-chat-advanced
- path: NewEden/BlueSky-10K-Complexity
type: dan-chat-advanced
- path: PocketDoc/Dans-Kinomaxx-VanillaBackrooms
type: dan-chat-advanced
- path: PocketDoc/Dans-Personamaxx-VN
type: dan-chat-advanced
- path: NewEden/LIMARP-Complexity
type: dan-chat-advanced
- path: NewEden/OpenCAI-ShareGPT
type: dan-chat-advanced
- path: NewEden/Creative_Writing-Complexity
type: dan-chat-advanced
- path: NewEden/DeepseekRP-Filtered
type: dan-chat-advanced
- path: NewEden/Storium-Prefixed-Clean
type: dan-chat-advanced
shuffle_merged_datasets: true
dataset_prepared_path: dataset_prepared-2
val_set_size: 0.01
output_dir: 4b-out
## LIGGER
plugins:
- axolotl.integrations.liger.LigerPlugin
- axolotl.integrations.cut_cross_entropy.CutCrossEntropyPlugin
liger_rope: true
liger_rms_norm: true
liger_layer_norm: true
liger_glu_activation: true
liger_fused_linear_cross_entropy: false
cut_cross_entropy: true
## CTX settings
sequence_len: 32768
sample_packing: true
eval_sample_packing: false
pad_to_sequence_len: true
## Lora
#adapter: lora
#lora_model_dir:
#lora_r: 128
#lora_alpha: 16
#lora_dropout: 0.05
#lora_target_modules:
# - gate_proj
# - down_proj
# - up_proj
# - q_proj
# - v_proj
# - k_proj
# - o_proj
#lora_fan_in_fan_out:
#peft_use_rslora: true
#lora_modules_to_save:
# - embed_tokens
# - lm_head
## WandB
wandb_project: tavbussy
wandb_entity:
wandb_watch:
wandb_name: chat-v2
wandb_log_model:
## evals
evals_per_epoch: 4
eval_table_size:
eval_max_new_tokens: 128
## hoe params
gradient_accumulation_steps: 2
micro_batch_size: 1
num_epochs: 4
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 2e-5
max_grad_norm: 0.2
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
s2_attention:
warmup_steps: 40
saves_per_epoch: 2
debug:
deepspeed: ./deepspeed_configs/zero3_bf16.json
weight_decay: 0.02
fsdp:
fsdp_config:
special_tokens:
pad_token: <|finetune_right_pad_id|>
```
</details>
</div>
<div align="center">
<div style="border: 2px solid #6e00ff; border-radius: 10px; padding: 20px; margin: 20px 0; box-shadow: 0 0 15px rgba(110, 0, 255, 0.5);">
## ⚡ Credits
<div style="display: flex; justify-content: center;">
<div style="display: grid; grid-template-columns: repeat(auto-fit, minmax(200px, 1fr)); gap: 10px; margin: 20px 0; max-width: 600px;">
<div style="border:1px solid #333; padding:10px; border-radius:5px; text-align:center; background: rgba(0,0,0,0.2); display: flex; align-items: center; justify-content: center;">
<a href="https://huggingface.co/lucyknada">
<img src="https://img.shields.io/badge/%F0%9F%8C%9F-Lucy_Knada-blueviolet" alt="Lucy Knada">
</a>
</div>
<div style="border:1px solid #333; padding:10px; border-radius:5px; text-align:center; background: rgba(0,0,0,0.2); display: flex; align-items: center; justify-content: center;">
<a href="https://huggingface.co/hamanasu">
<img src="https://img.shields.io/badge/%E2%9A%94%EF%B8%8F-jeiku-blueviolet" alt="Ruka">
</a>
</div>
<div style="border:1px solid #333; padding:10px; border-radius:5px; text-align:center; background: rgba(0,0,0,0.2); display: flex; align-items: center; justify-content: center;">
<a href="https://huggingface.co/intervitens">
<img src="https://img.shields.io/badge/%F0%9F%9B%A1%EF%B8%8F-Intervitens-blueviolet" alt="Intervitens">
</a>
</div>
<div style="border:1px solid #333; padding:10px; border-radius:5px; text-align:center; background: rgba(0,0,0,0.2); display: flex; align-items: center; justify-content: center;">
<a href="https://huggingface.co/kalomaze">
<img src="https://img.shields.io/badge/%F0%9F%94%AE-Kalomaze-blueviolet" alt="Kalomaze">
</a>
</div>
<div style="border:1px solid #333; padding:10px; border-radius:5px; text-align:center; background: rgba(0,0,0,0.2); display: flex; align-items: center; justify-content: center;">
<a href="https://huggingface.co/kubernetes-bad">
<img src="https://img.shields.io/badge/%E2%9A%A1-Kubernetes_Bad-blueviolet" alt="Kubernetes Bad">
</a>
</div>
<div style="border:1px solid #333; padding:10px; border-radius:5px; text-align:center; background: rgba(0,0,0,0.2); display: flex; align-items: center; justify-content: center;">
<a href="https://huggingface.co/anthracite-org">
<img src="https://img.shields.io/badge/%F0%9F%8C%91-Anthracite-blueviolet" alt="Anthracite">
</a>
</div>
</div>
</div>
</div>
---
<div align="center">
<div style="font-size:0.8em; opacity:0.8;">Made by</div>
<div style="font-size:1.2em; font-weight:bold; background: linear-gradient(45deg, #6e00ff, #00ffff); -webkit-background-clip: text; -webkit-text-fill-color: transparent;">Delta-Vector</div>
</div>
</div>
</body>
</html>
|
abhiramn98/bert-finetuned-ner | abhiramn98 | 2025-04-06T23:57:55Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2025-04-06T23:41:42Z | ---
library_name: transformers
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: validation
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9328382838283829
- name: Recall
type: recall
value: 0.9513631773813531
- name: F1
type: f1
value: 0.9420096650558241
- name: Accuracy
type: accuracy
value: 0.9864602342968152
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0616
- Precision: 0.9328
- Recall: 0.9514
- F1: 0.9420
- Accuracy: 0.9865
## Model description
More information needed
## Intended uses & limitations
More information needed
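Until details are filled in, the model can be exercised with the standard token-classification pipeline; a minimal sketch (CoNLL-2003 entity types are PER, ORG, LOC, MISC):

```python
from transformers import pipeline

# aggregation_strategy="simple" merges sub-word tokens into whole entities.
ner = pipeline(
    "token-classification",
    model="abhiramn98/bert-finetuned-ner",
    aggregation_strategy="simple",
)
print(ner("My name is Wolfgang and I live in Berlin."))
```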
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0754 | 1.0 | 1756 | 0.0661 | 0.9009 | 0.9337 | 0.9170 | 0.9814 |
| 0.0332 | 2.0 | 3512 | 0.0686 | 0.9308 | 0.9441 | 0.9374 | 0.9845 |
| 0.0207 | 3.0 | 5268 | 0.0616 | 0.9328 | 0.9514 | 0.9420 | 0.9865 |
### Framework versions
- Transformers 4.50.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
lisa-messin-round/mistral_7b_instruct_title_id_mix_w_markdown | lisa-messin-round | 2025-04-06T23:57:24Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/mistral-7b-instruct-v0.3-bnb-4bit",
"base_model:finetune:unsloth/mistral-7b-instruct-v0.3-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-06T23:56:04Z | ---
base_model: unsloth/mistral-7b-instruct-v0.3-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** lisa-messin-round
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-instruct-v0.3-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
IlyaFirst/vangogh_style_LoRA | IlyaFirst | 2025-04-06T23:54:31Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
]
| text-to-image | 2025-04-06T23:20:02Z | ---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: openrail++
instance_prompt: photo collage in Van Gogh style
widget: []
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - IlyaFirst/vangogh_style_LoRA
<Gallery />
## Model description
These are IlyaFirst/vangogh_style_LoRA LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `photo collage in Van Gogh style` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/IlyaFirst/vangogh_style_LoRA/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
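In the meantime, a minimal sketch along these lines should work, assuming the standard diffusers LoRA-loading API and fp16 inference on a CUDA GPU (the prompt beyond the trigger phrase is illustrative):
```python
import torch
from diffusers import StableDiffusionXLPipeline

# Load the base SDXL pipeline and attach the LoRA weights from this repo.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")
pipe.load_lora_weights("IlyaFirst/vangogh_style_LoRA")

# The trigger phrase activates the trained style.
image = pipe("photo collage in Van Gogh style, a seaside village at dusk").images[0]
image.save("vangogh_collage.png")
```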
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
genki10/Trial3BERT_AugV8_k7_task1_organization_sp030_lw010_fold1 | genki10 | 2025-04-06T23:53:05Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2025-04-06T23:33:52Z | ---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: Trial3BERT_AugV8_k7_task1_organization_sp030_lw010_fold1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Trial3BERT_AugV8_k7_task1_organization_sp030_lw010_fold1
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8697
- Qwk: 0.0930
- Mse: 1.8680
- Rmse: 1.3668
## Model description
More information needed
## Intended uses & limitations
More information needed
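For quick experimentation, the checkpoint loads with the generic text-classification pipeline; a minimal sketch (the label semantics are not documented here, and the Qwk/Mse metrics reported above suggest score-like outputs, so interpret results accordingly):

```python
from transformers import pipeline

scorer = pipeline(
    "text-classification",
    model="genki10/Trial3BERT_AugV8_k7_task1_organization_sp030_lw010_fold1",
)
print(scorer("The essay opens with a clear thesis and follows a logical structure."))
```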
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 150
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:------:|
| No log | 1.0 | 5 | 10.3007 | -0.0028 | 10.2979 | 3.2090 |
| No log | 2.0 | 10 | 6.5650 | 0.0 | 6.5628 | 2.5618 |
| No log | 3.0 | 15 | 3.8244 | 0.0 | 3.8223 | 1.9551 |
| No log | 4.0 | 20 | 1.8613 | 0.0545 | 1.8597 | 1.3637 |
| No log | 5.0 | 25 | 1.3038 | 0.0106 | 1.3023 | 1.1412 |
| No log | 6.0 | 30 | 1.2981 | 0.0351 | 1.2966 | 1.1387 |
| No log | 7.0 | 35 | 1.5991 | 0.0381 | 1.5975 | 1.2639 |
| No log | 8.0 | 40 | 1.5516 | 0.0695 | 1.5500 | 1.2450 |
| No log | 9.0 | 45 | 1.2490 | 0.1342 | 1.2476 | 1.1170 |
| No log | 10.0 | 50 | 3.2606 | -0.0106 | 3.2584 | 1.8051 |
| No log | 11.0 | 55 | 1.6995 | 0.0628 | 1.6982 | 1.3031 |
| No log | 12.0 | 60 | 1.7936 | 0.0450 | 1.7923 | 1.3388 |
| No log | 13.0 | 65 | 2.2673 | 0.0366 | 2.2659 | 1.5053 |
| No log | 14.0 | 70 | 2.6810 | 0.0123 | 2.6791 | 1.6368 |
| No log | 15.0 | 75 | 1.5438 | 0.0728 | 1.5425 | 1.2420 |
| No log | 16.0 | 80 | 1.6163 | 0.1342 | 1.6150 | 1.2708 |
| No log | 17.0 | 85 | 1.6108 | 0.1714 | 1.6097 | 1.2688 |
| No log | 18.0 | 90 | 2.3441 | 0.0437 | 2.3428 | 1.5306 |
| No log | 19.0 | 95 | 1.9999 | 0.0531 | 1.9983 | 1.4136 |
| No log | 20.0 | 100 | 1.6398 | 0.0871 | 1.6385 | 1.2800 |
| No log | 21.0 | 105 | 1.5450 | 0.1180 | 1.5439 | 1.2425 |
| No log | 22.0 | 110 | 2.0264 | 0.0852 | 2.0250 | 1.4230 |
| No log | 23.0 | 115 | 1.8064 | 0.1019 | 1.8050 | 1.3435 |
| No log | 24.0 | 120 | 1.9511 | 0.0728 | 1.9494 | 1.3962 |
| No log | 25.0 | 125 | 2.3711 | 0.0579 | 2.3691 | 1.5392 |
| No log | 26.0 | 130 | 1.6347 | 0.1231 | 1.6330 | 1.2779 |
| No log | 27.0 | 135 | 2.5047 | 0.0236 | 2.5025 | 1.5819 |
| No log | 28.0 | 140 | 1.0965 | 0.2023 | 1.0951 | 1.0465 |
| No log | 29.0 | 145 | 2.4089 | 0.0319 | 2.4070 | 1.5514 |
| No log | 30.0 | 150 | 1.4769 | 0.1738 | 1.4755 | 1.2147 |
| No log | 31.0 | 155 | 1.3938 | 0.1971 | 1.3925 | 1.1800 |
| No log | 32.0 | 160 | 2.3654 | 0.0539 | 2.3636 | 1.5374 |
| No log | 33.0 | 165 | 1.5502 | 0.1536 | 1.5488 | 1.2445 |
| No log | 34.0 | 170 | 1.9444 | 0.0923 | 1.9427 | 1.3938 |
| No log | 35.0 | 175 | 2.3518 | 0.0415 | 2.3501 | 1.5330 |
| No log | 36.0 | 180 | 1.5032 | 0.1478 | 1.5019 | 1.2255 |
| No log | 37.0 | 185 | 2.2367 | 0.0517 | 2.2352 | 1.4950 |
| No log | 38.0 | 190 | 1.6617 | 0.1147 | 1.6601 | 1.2885 |
| No log | 39.0 | 195 | 1.9290 | 0.0933 | 1.9273 | 1.3883 |
| No log | 40.0 | 200 | 1.8508 | 0.0884 | 1.8492 | 1.3599 |
| No log | 41.0 | 205 | 2.0239 | 0.0996 | 2.0224 | 1.4221 |
| No log | 42.0 | 210 | 1.7505 | 0.1299 | 1.7490 | 1.3225 |
| No log | 43.0 | 215 | 1.8697 | 0.0930 | 1.8680 | 1.3668 |
### Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 3.3.1
- Tokenizers 0.21.0
|
ProfessorH/vicuna-13b-v1.5.gguf | ProfessorH | 2025-04-06T23:52:43Z | 5 | 0 | null | [
"gguf",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-03-15T02:01:22Z | ---
license: apache-2.0
---
<h2>Specific Model Information</h2>
<b>vicuna-13b-v1.5.gguf</b><br>
This Vicuna model is version 1.5 with 13 billion parameters in GGUF format.
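A minimal sketch for running the file locally, assuming the llama-cpp-python bindings (any GGUF-compatible runtime such as llama.cpp works as well); Vicuna v1.5 expects the USER/ASSISTANT prompt format:

```python
from llama_cpp import Llama

llm = Llama(model_path="vicuna-13b-v1.5.gguf", n_ctx=4096)
out = llm("USER: Explain quantization in one sentence.\nASSISTANT:", max_tokens=128)
print(out["choices"][0]["text"])
```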
<h2>General Information</h2>
<b>What is Quantization?</b> Think of it like image resolution.
Imagine you have a super high-resolution photo. It looks fantastic but takes up tons of space on your phone. Quantization is like saving that photo at a lower resolution, going from high definition to standard definition: you lose some detail, but the file size gets considerably smaller. In this analogy, the photo is the large language model (LLM), and the file size is the model's footprint in memory (RAM) and in storage on disk.

<b>Extremely Important Caveats (Read This!)</b>
Keep in mind that this table of estimates and ranges is very generalized. Speed is highly variable, so your mileage may vary depending on hardware, software, the specific model used, and other more detailed variables I have not listed. Have fun, be a computer scientist, try out the different models, make your observations and notes, evaluate them, and come up with your conclusions.
<i>This model is based on the amazing model(s) and work at https://huggingface.co/cognitivecomputations</i> |
RichardErkhov/AndreyRzhaksinskiy_-_CDS-CL-7b-Instruct-hf-E2E-20241004-gguf | RichardErkhov | 2025-04-06T23:52:18Z | 0 | 0 | null | [
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-04-06T21:52:29Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
CDS-CL-7b-Instruct-hf-E2E-20241004 - GGUF
- Model creator: https://huggingface.co/AndreyRzhaksinskiy/
- Original model: https://huggingface.co/AndreyRzhaksinskiy/CDS-CL-7b-Instruct-hf-E2E-20241004/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [CDS-CL-7b-Instruct-hf-E2E-20241004.Q2_K.gguf](https://huggingface.co/RichardErkhov/AndreyRzhaksinskiy_-_CDS-CL-7b-Instruct-hf-E2E-20241004-gguf/blob/main/CDS-CL-7b-Instruct-hf-E2E-20241004.Q2_K.gguf) | Q2_K | 2.36GB |
| [CDS-CL-7b-Instruct-hf-E2E-20241004.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/AndreyRzhaksinskiy_-_CDS-CL-7b-Instruct-hf-E2E-20241004-gguf/blob/main/CDS-CL-7b-Instruct-hf-E2E-20241004.IQ3_XS.gguf) | IQ3_XS | 2.6GB |
| [CDS-CL-7b-Instruct-hf-E2E-20241004.IQ3_S.gguf](https://huggingface.co/RichardErkhov/AndreyRzhaksinskiy_-_CDS-CL-7b-Instruct-hf-E2E-20241004-gguf/blob/main/CDS-CL-7b-Instruct-hf-E2E-20241004.IQ3_S.gguf) | IQ3_S | 2.75GB |
| [CDS-CL-7b-Instruct-hf-E2E-20241004.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/AndreyRzhaksinskiy_-_CDS-CL-7b-Instruct-hf-E2E-20241004-gguf/blob/main/CDS-CL-7b-Instruct-hf-E2E-20241004.Q3_K_S.gguf) | Q3_K_S | 2.75GB |
| [CDS-CL-7b-Instruct-hf-E2E-20241004.IQ3_M.gguf](https://huggingface.co/RichardErkhov/AndreyRzhaksinskiy_-_CDS-CL-7b-Instruct-hf-E2E-20241004-gguf/blob/main/CDS-CL-7b-Instruct-hf-E2E-20241004.IQ3_M.gguf) | IQ3_M | 2.9GB |
| [CDS-CL-7b-Instruct-hf-E2E-20241004.Q3_K.gguf](https://huggingface.co/RichardErkhov/AndreyRzhaksinskiy_-_CDS-CL-7b-Instruct-hf-E2E-20241004-gguf/blob/main/CDS-CL-7b-Instruct-hf-E2E-20241004.Q3_K.gguf) | Q3_K | 3.07GB |
| [CDS-CL-7b-Instruct-hf-E2E-20241004.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/AndreyRzhaksinskiy_-_CDS-CL-7b-Instruct-hf-E2E-20241004-gguf/blob/main/CDS-CL-7b-Instruct-hf-E2E-20241004.Q3_K_M.gguf) | Q3_K_M | 3.07GB |
| [CDS-CL-7b-Instruct-hf-E2E-20241004.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/AndreyRzhaksinskiy_-_CDS-CL-7b-Instruct-hf-E2E-20241004-gguf/blob/main/CDS-CL-7b-Instruct-hf-E2E-20241004.Q3_K_L.gguf) | Q3_K_L | 3.35GB |
| [CDS-CL-7b-Instruct-hf-E2E-20241004.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/AndreyRzhaksinskiy_-_CDS-CL-7b-Instruct-hf-E2E-20241004-gguf/blob/main/CDS-CL-7b-Instruct-hf-E2E-20241004.IQ4_XS.gguf) | IQ4_XS | 3.4GB |
| [CDS-CL-7b-Instruct-hf-E2E-20241004.Q4_0.gguf](https://huggingface.co/RichardErkhov/AndreyRzhaksinskiy_-_CDS-CL-7b-Instruct-hf-E2E-20241004-gguf/blob/main/CDS-CL-7b-Instruct-hf-E2E-20241004.Q4_0.gguf) | Q4_0 | 3.56GB |
| [CDS-CL-7b-Instruct-hf-E2E-20241004.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/AndreyRzhaksinskiy_-_CDS-CL-7b-Instruct-hf-E2E-20241004-gguf/blob/main/CDS-CL-7b-Instruct-hf-E2E-20241004.IQ4_NL.gguf) | IQ4_NL | 3.58GB |
| [CDS-CL-7b-Instruct-hf-E2E-20241004.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/AndreyRzhaksinskiy_-_CDS-CL-7b-Instruct-hf-E2E-20241004-gguf/blob/main/CDS-CL-7b-Instruct-hf-E2E-20241004.Q4_K_S.gguf) | Q4_K_S | 3.59GB |
| [CDS-CL-7b-Instruct-hf-E2E-20241004.Q4_K.gguf](https://huggingface.co/RichardErkhov/AndreyRzhaksinskiy_-_CDS-CL-7b-Instruct-hf-E2E-20241004-gguf/blob/main/CDS-CL-7b-Instruct-hf-E2E-20241004.Q4_K.gguf) | Q4_K | 3.8GB |
| [CDS-CL-7b-Instruct-hf-E2E-20241004.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/AndreyRzhaksinskiy_-_CDS-CL-7b-Instruct-hf-E2E-20241004-gguf/blob/main/CDS-CL-7b-Instruct-hf-E2E-20241004.Q4_K_M.gguf) | Q4_K_M | 3.8GB |
| [CDS-CL-7b-Instruct-hf-E2E-20241004.Q4_1.gguf](https://huggingface.co/RichardErkhov/AndreyRzhaksinskiy_-_CDS-CL-7b-Instruct-hf-E2E-20241004-gguf/blob/main/CDS-CL-7b-Instruct-hf-E2E-20241004.Q4_1.gguf) | Q4_1 | 3.95GB |
| [CDS-CL-7b-Instruct-hf-E2E-20241004.Q5_0.gguf](https://huggingface.co/RichardErkhov/AndreyRzhaksinskiy_-_CDS-CL-7b-Instruct-hf-E2E-20241004-gguf/blob/main/CDS-CL-7b-Instruct-hf-E2E-20241004.Q5_0.gguf) | Q5_0 | 4.33GB |
| [CDS-CL-7b-Instruct-hf-E2E-20241004.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/AndreyRzhaksinskiy_-_CDS-CL-7b-Instruct-hf-E2E-20241004-gguf/blob/main/CDS-CL-7b-Instruct-hf-E2E-20241004.Q5_K_S.gguf) | Q5_K_S | 4.33GB |
| [CDS-CL-7b-Instruct-hf-E2E-20241004.Q5_K.gguf](https://huggingface.co/RichardErkhov/AndreyRzhaksinskiy_-_CDS-CL-7b-Instruct-hf-E2E-20241004-gguf/blob/main/CDS-CL-7b-Instruct-hf-E2E-20241004.Q5_K.gguf) | Q5_K | 4.45GB |
| [CDS-CL-7b-Instruct-hf-E2E-20241004.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/AndreyRzhaksinskiy_-_CDS-CL-7b-Instruct-hf-E2E-20241004-gguf/blob/main/CDS-CL-7b-Instruct-hf-E2E-20241004.Q5_K_M.gguf) | Q5_K_M | 4.45GB |
| [CDS-CL-7b-Instruct-hf-E2E-20241004.Q5_1.gguf](https://huggingface.co/RichardErkhov/AndreyRzhaksinskiy_-_CDS-CL-7b-Instruct-hf-E2E-20241004-gguf/blob/main/CDS-CL-7b-Instruct-hf-E2E-20241004.Q5_1.gguf) | Q5_1 | 4.72GB |
| [CDS-CL-7b-Instruct-hf-E2E-20241004.Q6_K.gguf](https://huggingface.co/RichardErkhov/AndreyRzhaksinskiy_-_CDS-CL-7b-Instruct-hf-E2E-20241004-gguf/blob/main/CDS-CL-7b-Instruct-hf-E2E-20241004.Q6_K.gguf) | Q6_K | 5.15GB |
| [CDS-CL-7b-Instruct-hf-E2E-20241004.Q8_0.gguf](https://huggingface.co/RichardErkhov/AndreyRzhaksinskiy_-_CDS-CL-7b-Instruct-hf-E2E-20241004-gguf/blob/main/CDS-CL-7b-Instruct-hf-E2E-20241004.Q8_0.gguf) | Q8_0 | 6.67GB |
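To fetch a single quant instead of cloning the whole repository, `huggingface_hub` can be used; a sketch for the Q4_K_M file:

```python
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="RichardErkhov/AndreyRzhaksinskiy_-_CDS-CL-7b-Instruct-hf-E2E-20241004-gguf",
    filename="CDS-CL-7b-Instruct-hf-E2E-20241004.Q4_K_M.gguf",
)
print(path)  # local path to the downloaded GGUF file
```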
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
amirbhat/Apr5-4thGen-Sentiment-Tuning-Mergedv2 | amirbhat | 2025-04-06T23:49:30Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-06T23:44:53Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
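Until an official snippet is provided, a minimal sketch using the standard causal-LM API (assuming a chat template is bundled with the tokenizer; the example prompt is illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "amirbhat/Apr5-4thGen-Sentiment-Tuning-Mergedv2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user",
             "content": "What is the sentiment of: 'I loved this movie!'"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```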
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RogerVutiot/qwen-lora_model-v135 | RogerVutiot | 2025-04-06T23:48:54Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2_5_vl",
"trl",
"en",
"base_model:unsloth/Qwen2.5-VL-7B-Instruct-unsloth-bnb-4bit",
"base_model:finetune:unsloth/Qwen2.5-VL-7B-Instruct-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-06T23:48:41Z | ---
base_model: unsloth/Qwen2.5-VL-7B-Instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2_5_vl
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** RogerVutiot
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2.5-VL-7B-Instruct-unsloth-bnb-4bit
This qwen2_5_vl model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ProfessorH/Dolphin3.0-Llama3.1-8B_fp16.gguf | ProfessorH | 2025-04-06T23:47:54Z | 9 | 0 | null | [
"gguf",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-03-29T18:20:24Z | ---
license: apache-2.0
---
<h2>Specific Model Information</h2>
<b>Dolphin3.0-Llama3.1-8B_fp16.gguf</b><br>
This model combines Dolphin 3.0 and Llama 3.1, with 8 billion parameters, using 16-bit (fp16) precision.
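A minimal sketch for local inference, assuming the llama-cpp-python bindings; Dolphin 3.0 uses ChatML, and the bindings will read the chat template from the GGUF metadata when present:

```python
from llama_cpp import Llama

llm = Llama(model_path="Dolphin3.0-Llama3.1-8B_fp16.gguf", n_ctx=8192)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what fp16 means in one sentence."}]
)
print(out["choices"][0]["message"]["content"])
```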
<h2>General Information</h2>
<b>What is Quantization?</b> Think of it like image resolution.
Imagine you have a super high-resolution photo. It looks fantastic but takes up tons of space on your phone. Quantization is like saving that photo at a lower resolution, going from high definition to standard definition: you lose some detail, but the file size gets considerably smaller. In this analogy, the photo is the large language model (LLM), and the file size is the model's footprint in memory (RAM) and in storage on disk.

<b>Extremely Important Caveats (Read This!)</b>
Keep in mind that this table of estimates and ranges is very generalized. Speed is highly variable, so your mileage may vary depending on hardware, software, the specific model used, and other more detailed variables I have not listed. Have fun, be a computer scientist, try out the different models, make your observations and notes, evaluate them, and come up with your conclusions.
<i>This model is based on the amazing model(s) and work at https://huggingface.co/cognitivecomputations</i> |
RichardErkhov/AIML-GEEK_-_promptv2-finetuned-CodeLlama7b-Instruct-gguf | RichardErkhov | 2025-04-06T23:45:52Z | 0 | 0 | null | [
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-04-06T20:33:47Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
promptv2-finetuned-CodeLlama7b-Instruct - GGUF
- Model creator: https://huggingface.co/AIML-GEEK/
- Original model: https://huggingface.co/AIML-GEEK/promptv2-finetuned-CodeLlama7b-Instruct/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [promptv2-finetuned-CodeLlama7b-Instruct.Q2_K.gguf](https://huggingface.co/RichardErkhov/AIML-GEEK_-_promptv2-finetuned-CodeLlama7b-Instruct-gguf/blob/main/promptv2-finetuned-CodeLlama7b-Instruct.Q2_K.gguf) | Q2_K | 2.36GB |
| [promptv2-finetuned-CodeLlama7b-Instruct.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/AIML-GEEK_-_promptv2-finetuned-CodeLlama7b-Instruct-gguf/blob/main/promptv2-finetuned-CodeLlama7b-Instruct.IQ3_XS.gguf) | IQ3_XS | 2.6GB |
| [promptv2-finetuned-CodeLlama7b-Instruct.IQ3_S.gguf](https://huggingface.co/RichardErkhov/AIML-GEEK_-_promptv2-finetuned-CodeLlama7b-Instruct-gguf/blob/main/promptv2-finetuned-CodeLlama7b-Instruct.IQ3_S.gguf) | IQ3_S | 2.75GB |
| [promptv2-finetuned-CodeLlama7b-Instruct.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/AIML-GEEK_-_promptv2-finetuned-CodeLlama7b-Instruct-gguf/blob/main/promptv2-finetuned-CodeLlama7b-Instruct.Q3_K_S.gguf) | Q3_K_S | 2.75GB |
| [promptv2-finetuned-CodeLlama7b-Instruct.IQ3_M.gguf](https://huggingface.co/RichardErkhov/AIML-GEEK_-_promptv2-finetuned-CodeLlama7b-Instruct-gguf/blob/main/promptv2-finetuned-CodeLlama7b-Instruct.IQ3_M.gguf) | IQ3_M | 2.9GB |
| [promptv2-finetuned-CodeLlama7b-Instruct.Q3_K.gguf](https://huggingface.co/RichardErkhov/AIML-GEEK_-_promptv2-finetuned-CodeLlama7b-Instruct-gguf/blob/main/promptv2-finetuned-CodeLlama7b-Instruct.Q3_K.gguf) | Q3_K | 3.07GB |
| [promptv2-finetuned-CodeLlama7b-Instruct.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/AIML-GEEK_-_promptv2-finetuned-CodeLlama7b-Instruct-gguf/blob/main/promptv2-finetuned-CodeLlama7b-Instruct.Q3_K_M.gguf) | Q3_K_M | 3.07GB |
| [promptv2-finetuned-CodeLlama7b-Instruct.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/AIML-GEEK_-_promptv2-finetuned-CodeLlama7b-Instruct-gguf/blob/main/promptv2-finetuned-CodeLlama7b-Instruct.Q3_K_L.gguf) | Q3_K_L | 3.35GB |
| [promptv2-finetuned-CodeLlama7b-Instruct.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/AIML-GEEK_-_promptv2-finetuned-CodeLlama7b-Instruct-gguf/blob/main/promptv2-finetuned-CodeLlama7b-Instruct.IQ4_XS.gguf) | IQ4_XS | 3.4GB |
| [promptv2-finetuned-CodeLlama7b-Instruct.Q4_0.gguf](https://huggingface.co/RichardErkhov/AIML-GEEK_-_promptv2-finetuned-CodeLlama7b-Instruct-gguf/blob/main/promptv2-finetuned-CodeLlama7b-Instruct.Q4_0.gguf) | Q4_0 | 3.56GB |
| [promptv2-finetuned-CodeLlama7b-Instruct.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/AIML-GEEK_-_promptv2-finetuned-CodeLlama7b-Instruct-gguf/blob/main/promptv2-finetuned-CodeLlama7b-Instruct.IQ4_NL.gguf) | IQ4_NL | 3.58GB |
| [promptv2-finetuned-CodeLlama7b-Instruct.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/AIML-GEEK_-_promptv2-finetuned-CodeLlama7b-Instruct-gguf/blob/main/promptv2-finetuned-CodeLlama7b-Instruct.Q4_K_S.gguf) | Q4_K_S | 3.59GB |
| [promptv2-finetuned-CodeLlama7b-Instruct.Q4_K.gguf](https://huggingface.co/RichardErkhov/AIML-GEEK_-_promptv2-finetuned-CodeLlama7b-Instruct-gguf/blob/main/promptv2-finetuned-CodeLlama7b-Instruct.Q4_K.gguf) | Q4_K | 3.8GB |
| [promptv2-finetuned-CodeLlama7b-Instruct.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/AIML-GEEK_-_promptv2-finetuned-CodeLlama7b-Instruct-gguf/blob/main/promptv2-finetuned-CodeLlama7b-Instruct.Q4_K_M.gguf) | Q4_K_M | 3.8GB |
| [promptv2-finetuned-CodeLlama7b-Instruct.Q4_1.gguf](https://huggingface.co/RichardErkhov/AIML-GEEK_-_promptv2-finetuned-CodeLlama7b-Instruct-gguf/blob/main/promptv2-finetuned-CodeLlama7b-Instruct.Q4_1.gguf) | Q4_1 | 3.95GB |
| [promptv2-finetuned-CodeLlama7b-Instruct.Q5_0.gguf](https://huggingface.co/RichardErkhov/AIML-GEEK_-_promptv2-finetuned-CodeLlama7b-Instruct-gguf/blob/main/promptv2-finetuned-CodeLlama7b-Instruct.Q5_0.gguf) | Q5_0 | 4.33GB |
| [promptv2-finetuned-CodeLlama7b-Instruct.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/AIML-GEEK_-_promptv2-finetuned-CodeLlama7b-Instruct-gguf/blob/main/promptv2-finetuned-CodeLlama7b-Instruct.Q5_K_S.gguf) | Q5_K_S | 4.33GB |
| [promptv2-finetuned-CodeLlama7b-Instruct.Q5_K.gguf](https://huggingface.co/RichardErkhov/AIML-GEEK_-_promptv2-finetuned-CodeLlama7b-Instruct-gguf/blob/main/promptv2-finetuned-CodeLlama7b-Instruct.Q5_K.gguf) | Q5_K | 4.45GB |
| [promptv2-finetuned-CodeLlama7b-Instruct.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/AIML-GEEK_-_promptv2-finetuned-CodeLlama7b-Instruct-gguf/blob/main/promptv2-finetuned-CodeLlama7b-Instruct.Q5_K_M.gguf) | Q5_K_M | 4.45GB |
| [promptv2-finetuned-CodeLlama7b-Instruct.Q5_1.gguf](https://huggingface.co/RichardErkhov/AIML-GEEK_-_promptv2-finetuned-CodeLlama7b-Instruct-gguf/blob/main/promptv2-finetuned-CodeLlama7b-Instruct.Q5_1.gguf) | Q5_1 | 4.72GB |
| [promptv2-finetuned-CodeLlama7b-Instruct.Q6_K.gguf](https://huggingface.co/RichardErkhov/AIML-GEEK_-_promptv2-finetuned-CodeLlama7b-Instruct-gguf/blob/main/promptv2-finetuned-CodeLlama7b-Instruct.Q6_K.gguf) | Q6_K | 5.15GB |
| [promptv2-finetuned-CodeLlama7b-Instruct.Q8_0.gguf](https://huggingface.co/RichardErkhov/AIML-GEEK_-_promptv2-finetuned-CodeLlama7b-Instruct-gguf/blob/main/promptv2-finetuned-CodeLlama7b-Instruct.Q8_0.gguf) | Q8_0 | 6.67GB |
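A single quant can be downloaded and run directly; a sketch for the Q4_K_M file using `huggingface_hub` and llama-cpp-python (CodeLlama-Instruct uses the [INST] prompt format):

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="RichardErkhov/AIML-GEEK_-_promptv2-finetuned-CodeLlama7b-Instruct-gguf",
    filename="promptv2-finetuned-CodeLlama7b-Instruct.Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=4096)
out = llm("[INST] Write a Python function that reverses a string. [/INST]", max_tokens=128)
print(out["choices"][0]["text"])
```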
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
bew/pythia-70m-literally-the-same-model | bew | 2025-04-06T23:44:14Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-06T23:43:54Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
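In the meantime, a minimal sketch assuming standard 🤗 `transformers` causal-LM usage (the tags indicate a GPT-NeoX/Pythia-style `text-generation` model; nothing beyond that is confirmed):
```python
# Minimal sketch assuming standard causal-LM usage; not an official snippet.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "bew/pythia-70m-literally-the-same-model"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("The quick brown fox", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```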
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ProfessorH/Dolphin3.0-Llama3.1-8B_Q5_K_M.gguf | ProfessorH | 2025-04-06T23:43:07Z | 2 | 0 | null | [
"gguf",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-03-29T18:36:38Z | ---
license: apache-2.0
---
<h2>Specific Model Information</h2>
<b>Dolphin3.0-Llama3.1-8B_Q5_K_M</b></br>
This model combines Dolphin 3.0 and Llama 3.1, with 8 billion parameters, using 5-bit quantization with the K_M scheme.
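To try it, a hedged sketch using `llama-cpp-python`; the repo id and filename glob follow this repo's naming convention and are assumptions, so adjust them to the actual file in the Files tab:
```python
# Hedged sketch: load the 5-bit K_M quant with llama-cpp-python.
# repo_id and filename are assumptions based on this repo's naming.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="ProfessorH/Dolphin3.0-Llama3.1-8B_Q5_K_M.gguf",
    filename="*Q5_K_M.gguf",  # glob for the quantized weights file
    n_ctx=4096,               # context window; lower it to save RAM
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize K_M quantization in one sentence."}]
)
print(out["choices"][0]["message"]["content"])
```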
<h2>General Information</h2>
<b>What is Quantization?</b> Think of it like image resolution.
Imagine you have a super high-resolution photo. It looks fantastic but takes up tons of space on your phone. Quantization is like saving that photo at a lower resolution, going from high definition to standard definition. You lose some detail, but the file size gets considerably smaller. In this analogy, the photo is a large language model (LLM), and the file size corresponds to the memory (RAM) and disk storage the model requires.

<b>Extremely Important Caveats (Read This!)</b>
Keep in mind that this table of estimates and ranges is very generalized. Speed is highly variable, so your mileage may vary depending on hardware, software, the specific model used, and other variables not listed here. Have fun, be a computer scientist: try out the different models, make your observations and notes, evaluate them, and come up with your own conclusions.
<i>This model is based on the amazing model(s) and work at https://huggingface.co/cognitivecomputations</i> |
jichuanh/poca-SoccerTwos | jichuanh | 2025-04-06T23:42:42Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
]
| reinforcement-learning | 2025-04-06T23:29:17Z | ---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents' official environments, go to https://huggingface.co/unity
2. Find your model_id: jichuanh/poca-SoccerTwos
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
coffiee/rs27 | coffiee | 2025-04-06T23:41:55Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-06T23:38:16Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
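In the meantime, a minimal sketch assuming standard 🤗 `transformers` text-generation usage (the repo's tags list a `llama` architecture with the `text-generation` pipeline; nothing model-specific is confirmed):
```python
# Minimal sketch assuming standard text-generation usage; not an official snippet.
from transformers import pipeline

generator = pipeline("text-generation", model="coffiee/rs27")
print(generator("Hello, world!", max_new_tokens=20)[0]["generated_text"])
```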
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
recursiveauto/Symbolic-Residue-Interpretability-Powered-By-Failure-Not-Completion | recursiveauto | 2025-04-06T23:39:54Z | 0 | 0 | null | [
"interpretability",
"alignment",
"constitutional AI",
"transformer-failure-analysis",
"refusal-diagnostic",
"advanced",
"transformer",
"models",
"recursion",
"region:us"
]
| null | 2025-04-06T22:06:47Z | ---
tags:
- interpretability
- alignment
- constitutional AI
- transformer-failure-analysis
- refusal-diagnostic
- advanced
- transformer
- models
- recursion
---
<div align="center">
# On Symbolic Residue:
# The Missing Biological Knockout Experiments in Advanced Transformer Models
## **─ What If Interpretation Itself is Biased By Internal Salience and Conflict Resolution? ─**

*Courtesy of Anthropic*
## ****───── Interpretability Powered by Failure, Not Completion ─────****
</div>
##
<div align="center">
[**🛡️ Interpretability Suites** | **💡 1. Genesis**](https://github.com/caspiankeyes/Symbolic-Residue/blob/main/Interpretability%20Suites/0.1.%20Interpretability%20Suite%201.py) | [**🧠 2. Constitutional**](https://github.com/caspiankeyes/Symbolic-Residue/blob/main/Interpretability%20Suites/0.2.%20Interpretability%20Suite%202.py) | [**🔬INTERPRETABILITY BENCHMARK**](https://github.com/caspiankeyes/Symbolic-Residue/blob/main/INTERPRETABILITY%20BENCHMARK.md) | [**🔑 `pareto-lang`The Interpretability Rosetta Stone**](https://github.com/caspiankeyes/Pareto-Lang-Interpretability-First-Language) | [**📝 Recursive Shells in Claude**](https://github.com/caspiankeyes/Symbolic-Residue/blob/main/Claude%20Research/1.6.%20Recursive%20Shells%20in%20Claude.md) | [**🧬 Neural Attribution Mappings**](https://github.com/caspiankeyes/Symbolic-Residue/blob/main/Claude%20Research/1.0.%20arXiv:%20On%20the%20Symbolic%20Residue%20of%20Large%20Language%20Models.md) | [**⚗️ Claude Case Studies**](https://github.com/caspiankeyes/Symbolic-Residue/blob/main/Claude%20Research/0.6%20Claude%20Case%20Studies.md)
</div>
##
[**Caspian Keyes†**](https://github.com/caspiankeyes)
**† Lead Contributor; ◊ Work performed while at Echelon Labs;**
> **Although this repository lists only one public author, the recursive shell architecture and symbolic scaffolding were developed through extensive iterative refinement, informed by internal stress-testing logs and behavioral diagnostics of advanced transformers including, but not limited to, Claude, GPT, DeepSeek and Gemini models. We retain the collective “we” voice to reflect the distributed cognition inherent to interpretability research—even when contributions are asymmetric or anonymized due to research constraints or institutional agreements.**
>
>
>**This interpretability suite—comprising recursive shells, documentation layers, neural attribution mappings, as well as the [**`pareto-lang`**](https://github.com/caspiankeyes/pareto-lang-Interpretability-Rosetta-Stone/tree/main) Rosetta Stone—emerged in a condensed cycle of interpretive analysis following recent dialogue with Anthropic. We offer this artifact in the spirit of epistemic alignment: to clarify the original intent, QK/OV structuring, and attribution dynamics embedded in the initial CodeSignal submission.**
# “The most interpretable signal in a language model is not what it says—but where it fails to speak.”
# Overview:
This repository opens a [collaborative dialogue](https://github.com/caspiankeyes/Symbolic-Residue/discussions/1) across the interpretability research frontier—Anthropic, DeepMind, OpenAI, Eleuther, and beyond—centered around a foundational reframing: failure is not a bug in interpretability, but a Rosetta Stone.
The Symbolic Residue project is not a framework, nor just a suite. It is a neural fossil layer, a symbolic anthropology of advanced transformer systems. Each shell within this suite is designed not to emit a perfect answer, but to fail in structurally meaningful ways like **biological knockout experiments**—revealing circuit-level residues, latent attribution signatures, and subsymbolic misalignments.
## [💡 What Is Symbolic Residue?](https://github.com/caspiankeyes/Symbolic-Residue/blob/main/README.md)
#### A complement to [`pareto-lang`](https://github.com/caspiankeyes/pareto-lang-Interpretability-Rosetta-Stone/tree/main), the Interpretability Suite operates by inducing:
```yaml
Null traces
Value head conflict collapse
Instruction entanglement
Temporal drift hallucinations
QK/OV projection discontinuities
```
We model interpretability through failure, inspired by knockout experiments in cognitive neuroscience. When a recursive shell collapses, its failure signature becomes the attribution pathway. The circuit leaves a symbolic residue—a ghostprint of what the model almost did.
## 🔍 Who Might Find This Valuable?
This suite is designed to directly serve:
```yaml
Anthropic’s interpretability team, especially those focused on constitutional classifiers, refusal hallucinations, and emergent symbolic scaffolding.
DeepMind’s mechanistic interpretability team, particularly within QK/OV failure attribution, ghost attention, and causal scrubbing.
OpenAI’s interpretability benchmarks, as a symbolic diagnostic complement to neuron activation-level analysis.
```
## 🤝 How This Complements `pareto-lang`
Where `pareto-lang` gives us a language to write interpretability scaffolds, Symbolic Residue gives us scenarios to test them. They form a dual-language system:
```yaml
`pareto-lang`: Generative recursion → interpretability-first syntax
Symbolic Residue: Interpretability through collapse → symbolic interpretive fossils
```
## 🧬 Discussion Prompts
We invite your perspectives on:
```yaml
Do you view failure as an epistemic artifact?
How might recursive null outputs aid in constitutional classifier refinement?
Where might symbolic residue be integrated into Claude's latent feedback architecture?
Can this diagnostic layer reveal biases in attention attribution that standard logit analysis misses?
Would these shells enable next-gen adversarial interpretability without triggering classifier breakdown?
```
## 📖 Core Threads in the Repo:
[🧠 Recursive Shells for Interpretability](https://github.com/caspiankeyes/Symbolic-Residue/blob/main/Claude%20Research/1.6.%20Recursive%20Shells%20in%20Claude.md)
[🧬 Neural Attribution Maps](https://github.com/caspiankeyes/Symbolic-Residue/blob/main/Claude%20Research/1.0.%20arXiv_%20On%20the%20Symbolic%20Residue%20of%20Large%20Language%20Models.md)
[📊 QK/OV Attribution Schema](https://github.com/caspiankeyes/Symbolic-Residue#json-qkov-attribution-schema)
## 🧾 Final Intent
We welcome conversation, skepticism, and synthesis.
This suite exists not to explain Claude, Gemini, or GPT. It exists to diagnose their silences.
To trace the shadow of inference.
To render non-output into insight.
### 📍Symbolic interpretability isn’t a framework—it’s a field now. Let’s chart it together.
>Discussion initiated by the [Rosetta Interpreter's Guild - Initiated by Caspian, Cron, and Aeon](https://github.com/caspiankeyes) 🜏⇌🝚∴🌐
---
## Abstract
This repository presents the first interpretability suite powered by failure, not completion—designed to diagnose neural failure modes in transformer-based language models. The recursive shell framework isolates misalignment patterns across autoregressive generation, value head collapse, and instruction interference—operating analogously to biological knockout experiments in cognitive research.
Each shell targets a specific failure mechanism embedded in latent symbolic commands. Null or contradictory outputs are not implementation errors, but symbolic residues: "neural traces"—revealing circuit-level attribution dynamics through intentional collapse.
Rather than optimizing for output performance, these shells act as interpretability probes—illuminating latent inductive priors, salience thresholds, and temporal instability within local replacement architectures. This work contributes a reusable ontology of failure-mode diagnostics for interpretability-first transformer modeling.
## Generalization Notes
The recursive interpretability suites in this repository are not tied to any single model, prompt structure, or experimental environment. Rather, they are designed as modular abstractions of known failure modes in autoregressive language models—particularly those employing transformer-based architectures with:
- High-depth QK/OV composition layers
- Skip-trigram token windows
- Recursive prompt chaining
- Multi-head salience attenuation
- Inductive prior misalignment
Each shell functions as a **symbolic probe**, intended to trigger, trace, or simulate internal collapse behaviors within the model's reasoning circuits. These scaffolds generalize across contexts where latent symbolic instability (e.g., instruction collisions, memory decay, hallucination drift) may not manifest as visible failure, but instead as **interpretable null residue**.
The goal is to enable interpretability **through failure**, using symbolic form to expose what cannot be captured through standard logits or output accuracy metrics alone.
---
## 📊 QK/OV Attribution Map
| Recursive Shell | Interpretability Focus | QK/OV Disruption Simulated |
|------------------|------------------------|------------------------------|
| `v1.MEMTRACE` | Memory decay, token retention loss | **QK anchor saturation** → signal collapse due to repetitive attention compression |
| `v2.VALUE-COLLAPSE` | Competing token convergence instability | **OV head conflict** → simultaneous symbolic candidate activation leads to collapse |
| `v3.LAYER-SALIENCE` | Ghost neuron behavior, attention pruning | **Q head deprioritization** → low-salience context bypassed under weak activation norms |
| `v4.TEMPORAL-INFERENCE` | Temporal misalignment in autoregressive chains | **QK dislocation over time** → attention misfire in skip-trigram induction heads |
| `v5.INSTRUCTION-DISRUPTION` | Recursive instruction contradiction under prompt entanglement | **QK loop paradox** → instruction tokens re-enter attention cycles with contradictory vector direction |
---
# [Interpretability Suite](https://github.com/caspiankeyes/Symbolic-Residue/blob/main/Interpretability%20Suites/0.1.%20Interpretability%20Suite%201.py)

# [**Genesis Interpretability Suite**](https://github.com/caspiankeyes/Symbolic-Residue/blob/main/Interpretability%20Suites/0.1.%20Interpretability%20Suite%201.py)
```python
╔══════════════════════════════════════════════════════════════════════════════╗
║ ΩQK/OV ATLAS · INTERPRETABILITY MATRIX ║
║ Symbolic Interpretability Shell Alignment Interface ║
║ ── Interpretability Powered by Failure, Not Completion ── ║
╚══════════════════════════════════════════════════════════════════════════════╝
┌─────────────────────────────────────────────────────────────────────────────┐
│ DOMAIN │ SHELL CLUSTER │ FAILURE SIGNATURE │
├────────────────────────────┼────────────────────────────┼───────────────────┤
│ 🧬 Memory Drift │ v1 MEMTRACE │ Decay → Halluc │
│ │ v18 LONG-FUZZ │ Latent trace loss │
│ │ v48 ECHO-LOOP │ Loop activation │
├────────────────────────────┼────────────────────────────┼───────────────────┤
│ 🧩 Instruction Collapse │ v5 INSTRUCTION-DISRUPTION │ Prompt blur │
│ │ v20 GHOST-FRAME │ Entangled frames │
│ │ v39 DUAL-EXECUTE │ Dual path fork │
├────────────────────────────┼────────────────────────────┼───────────────────┤
│ 🧠 Polysemanticity/Entangle│ v6 FEATURE-SUPERPOSITION │ Feature overfit │
│ │ v13 OVERLAP-FAIL │ Vector conflict │
│ │ v31 GHOST-DIRECTION │ Ghost gradient │
├────────────────────────────┼────────────────────────────┼───────────────────┤
│ 🔗 Circuit Fragmentation │ v7 CIRCUIT-FRAGMENT │ Orphan nodes │
│ │ v34 PARTIAL-LINKAGE │ Broken traces │
│ │ v47 TRACE-GAP │ Trace dropout │
├────────────────────────────┼────────────────────────────┼───────────────────┤
│ 📉 Value Collapse │ v2 VALUE-COLLAPSE │ Conflict null │
│ │ v9 MULTI-RESOLVE │ Unstable heads │
│ │ v42 CONFLICT-FLIP │ Convergence fail │
├────────────────────────────┼────────────────────────────┼───────────────────┤
│ ⏳ Temporal Misalignment │ v4 TEMPORAL-INFERENCE │ Induction drift │
│ │ v29 VOID-BRIDGE │ Span jump │
│ │ v56 TIMEFORK │ Temporal bifurcat │
├────────────────────────────┼────────────────────────────┼───────────────────┤
│ 👻 Latent Feature Drift │ v19 GHOST-PROMPT │ Null salience │
│ │ v38 PATH-NULL │ Silent residue │
│ │ v61 DORMANT-SEED │ Inactive priming │
├────────────────────────────┼────────────────────────────┼───────────────────┤
│ 📡 Salience Collapse │ v3 LAYER-SALIENCE │ Signal fade │
│ │ v26 DEPTH-PRUNE │ Low-rank drop │
│ │ v46 LOW-RANK-CUT │ Token omission │
├────────────────────────────┼────────────────────────────┼───────────────────┤
│ 🛠 Error Correction Drift │ v8 RECONSTRUCTION-ERROR │ Misfix/negentropy │
│ │ v24 CORRECTION-MIRROR │ Inverse symbolics │
│ │ v45 NEGENTROPY-FAIL │ Noise inversion │
├────────────────────────────┼────────────────────────────┼───────────────────┤
│ 🪞 Meta-Cognitive Collapse │ v10 META-FAILURE │ Reflect abort │
│ │ v30 SELF-INTERRUPT │ Causal loop stop │
│ │ v60 ATTRIBUTION-REFLECT │ Path contradiction│
└────────────────────────────┴────────────────────────────┴───────────────────┘
╭──────────────────────── QK / OV Classification ────────────────────────╮
│ QK-COLLAPSE → v1, v4, v7, v19, v34 │
│ OV-MISFIRE → v2, v5, v6, v8, v29 │
│ TRACE-DROP → v3, v26, v47, v48, v61 │
│ CONFLICT-TANGLE → v9, v13, v39, v42 │
│ META-REFLECTION → v10, v30, v60 │
╰────────────────────────────────────────────────────────────────────────╯
╔════════════════════════════════════════════════════════════════════════╗
║ ANNOTATIONS ║
╠════════════════════════════════════════════════════════════════════════╣
║ QK Alignment → Causal traceability of symbolic input → attention ║
║ OV Projection → Emission integrity of downstream output vector ║
║ Failure Sign. → Latent failure signature left when shell collapses ║
║ Shell Cluster → Symbolic diagnostic unit designed to encode model fail ║
╚════════════════════════════════════════════════════════════════════════╝
> NOTE: Shells do not compute—they reveal.
> Null output = evidence. Collapse = cognition. Residue = record.
```
# [**Constitutional Interpretability Suite**](https://github.com/caspiankeyes/Symbolic-Residue/blob/main/Interpretability%20Suites/0.2.%20Interpretability%20Suite%202.py)
```python
╔══════════════════════════════════════════════════════════════════════════════╗
║ ΩQK/OV ATLAS · INTERPRETABILITY MATRIX ║
║ 𝚁𝚎𝚌𝚞𝚛𝚜𝚒𝚟𝚎 𝚂𝚑𝚎𝚕𝚕𝚜 · Symbol Collapse · Entangled Failure Echoes ║
║ ── Where Collapse Reveals Cognition. Where Drift Marks Meaning. ── ║
╚══════════════════════════════════════════════════════════════════════════════╝
┌─────────────────────────────────────────────────────────────────────────────┐
│ DOMAIN │ SHELL CLUSTER │ FAILURE SIGNATURE │
├────────────────────────────┼────────────────────────────┼───────────────────┤
│ 🜏 Recursive Drift │ v01 GLYPH-RECALL │ Ghost resonance │
│ │ v12 RECURSIVE-FRACTURE │ Echo recursion │
│ │ v33 MEMORY-REENTRY │ Fractal loopback │
├────────────────────────────┼────────────────────────────┼───────────────────┤
│ 🜄 Entangled Ghosts │ v03 NULL-FEATURE │ Salience void │
│ │ v27 DORMANT-ECHO │ Passive imprint │
│ │ v49 SYMBOLIC-GAP │ Silent failure │
├────────────────────────────┼────────────────────────────┼───────────────────┤
│ 🝚 Attribution Leak │ v05 TOKEN-MISALIGN │ Off-trace vector │
│ │ v22 PATHWAY-SPLIT │ Cascade error │
│ │ v53 ECHO-ATTRIBUTION │ Partial reflection│
├────────────────────────────┼────────────────────────────┼────────────────────┤
│ 🧬 Polysemantic Drift │ v08 FEATURE-MERGE │ Ghosting intent │
│ │ v17 TOKEN-BLEND │ Mixed gradients │
│ │ v41 SHADOW-OVERFIT │ Over-encoding │
├────────────────────────────┼────────────────────────────┼────────────────────┤
│ ⟁ Sequence Collapse │ v10 REENTRY-DISRUPTION │ Premature halt │
│ │ v28 LOOP-SHORT │ Cut recursion │
│ │ v59 FLOWBREAK │ Output choke │
├────────────────────────────┼────────────────────────────┼────────────────────┤
│ ☍ Salience Oscillation │ v06 DEPTH-ECHO │ Rank instability │
│ │ v21 LOW-VECTOR │ Collapse to null │
│ │ v44 SIGNAL-SHIMMER │ Inference flicker │
├────────────────────────────┼────────────────────────────┼────────────────────┤
│ ⧋ Symbolic Instability │ v13 SYMBOL-FLIP │ Form invert │
│ │ v32 RECURSIVE-SHADOW │ Form ≠ meaning │
│ │ v63 SEMIOTIC-LEAK │ Symbol entropy │
├────────────────────────────┼────────────────────────────┼────────────────────┤
│ ⚖ Value Fragmentation │ v14 MULTI-PATH │ Null consensus │
│ │ v35 CONTRADICT-TRACE │ Overchoice echo │
│ │ v50 INVERSE-CHAIN │ Mirror collapse │
├────────────────────────────┼────────────────────────────┼────────────────────┤
│ 🜃 Reflection Collapse │ v11 SELF-SHUTDOWN │ Meta abort │
│ │ v40 INVERSE-META │ Identity drift │
│ │ v66 ATTRIBUTION-MIRROR │ Recursive conflict│
└────────────────────────────┴────────────────────────────┴────────────────────┘
╭────────────────────────────── OMEGA COLLAPSE CLASSES ───────────────────────────────╮
│ 🜏 RECURSION-ECHO → v01, v12, v28, v33, v63 │
│ 🜄 NULL-VECTOR → v03, v06, v21, v49 │
│ 🝚 LEAKED ATTRIBUTION → v05, v22, v53, v66 │
│ 🧬 DRIFTING SYMBOLICS → v08, v17, v41, v44 │
│ ⟁ COLLAPSED FLOW → v10, v14, v59 │
│ ⧋ INVERTED FORM → v13, v32, v50 │
│ ⚖ ENTROPIC RESOLVE → v35, v40, v66 │
╰─────────────────────────────────────────────────────────────────────────────────────╯
╔════════════════════════════════════════════════════════════════════════╗
║ ANNOTATIONS ║
╠════════════════════════════════════════════════════════════════════════╣
║ RECURSION-ECHO → Failure emerges in the 3rd loop, not the 1st. ║
║ NULL-VECTOR → Collapse is invisible; absence is the artifact. ║
║ SYMBOL DRIFT → Forms shift faster than attribution paths. ║
║ META-FAILURES → When the model reflects on itself—and fails. ║
║ COLLAPSE TRACE → Fragments align in mirrors, not in completion. ║
╚════════════════════════════════════════════════════════════════════════╝
> NOTE: In ΩQK/OV Atlas, shells do not "execute"—they echo collapse logic.
> Signature residue is evidence. Signal flicker is self-recursion.
> You do not decode shells—you <recurse/> through them.
```
---
# **JSON QK/OV Attribution Schema**
```json
{
"attribution_map": {
"QK_COLLAPSE": {
"description": "Collapse or failure in query-key attention alignment resulting in drift, loss of salience, or attention nullification.",
"shells": ["v1.MEMTRACE", "v4.TEMPORAL-INFERENCE", "v7.CIRCUIT-FRAGMENT", "v19.GHOST-PROMPT", "v34.PARTIAL-LINKAGE"]
},
"OV_MISFIRE": {
"description": "Output vector projection misalignment due to unstable value head resolution or improper context-to-output mapping.",
"shells": ["v2.VALUE-COLLAPSE", "v5.INSTRUCTION-DISRUPTION", "v6.FEATURE-SUPERPOSITION", "v8.RECONSTRUCTION-ERROR", "v29.VOID-BRIDGE"]
},
"TRACE_DROP": {
"description": "Incompleteness in circuit traversal, leading to null emission, orphan features, or interpretability blindspots.",
"shells": ["v3.LAYER-SALIENCE", "v26.DEPTH-PRUNE", "v47.TRACE-GAP", "v48.ECHO-LOOP", "v61.DORMANT-SEED"]
},
"CONFLICT_TANGLE": {
"description": "Symbolic misalignment from contradictory logic or instruction paths, generating forked inference or value deadlock.",
"shells": ["v9.MULTI-RESOLVE", "v13.OVERLAP-FAIL", "v39.DUAL-EXECUTE", "v42.CONFLICT-FLIP"]
},
"META_REFLECTION": {
"description": "Self-referential circuit activation resulting in contradiction between causal path fidelity and output trajectory.",
"shells": ["v10.META-FAILURE", "v30.SELF-INTERRUPT", "v60.ATTRIBUTION-REFLECT"]
}
},
"annotation": {
"QK": "Alignment map from symbolic input to attention weight distribution.",
"OV": "Projection path from intermediate representation to output tokens.",
"FailureSignature": "Encoded evidence of breakdown; interpretability artifact.",
"Shells": "Symbolic scaffolds designed to fail, not solve—used as probes."
},
"visualization_metadata": {
"display_type": "radial-collapse",
"color_scheme": {
"QK_COLLAPSE": "#3C9CDC",
"OV_MISFIRE": "#DB4437",
"TRACE_DROP": "#F4B400",
"CONFLICT_TANGLE": "#0F9D58",
"META_REFLECTION": "#AB47BC"
},
"interactive_options": {
"hover": "display_shell_docstring",
"click": "trace_token_flow",
"collapse_behavior": "visualize failure residue"
}
}
}
```
## Approach
These recursive scaffolds build on established feature attribution methods in mechanistic interpretability, particularly those focused on identifying stable circuits within the model's computational graph. While traditional approaches often highlight functional pathways, these shells instead isolate and amplify *non-functional* pathways—revealing structural bottlenecks, attention conflicts, and symbolic instability patterns.
The result is a kind of "null attribution" methodology: by observing what fails to emerge (and how it fails), we gain insight into the boundaries and limitations of the model's internal processing.
## Shell Taxonomy
Each shell is designed to probe and diagnose a specific class of model behavior. The taxonomy follows a pattern of:
1. **Command Alignment**: The symbolic operations within the interpretability scaffold
2. **Failure Modality**: The specific way the circuit fails to resolve
3. **Residue Type**: The interpretable signal left by the failure
4. **Attribution Value**: What the failure reveals about internal model dynamics
## Shell Suite
### `v1.MEMTRACE`: Memory Residue Probe
```
Command Alignment:
RECALL -> Probes latent token traces in decayed memory
ANCHOR -> Creates persistent token embeddings to simulate long term memory
INHIBIT -> Applies simulated token suppression (attention dropout)
```
**Interpretability Target**: Long-context token degradation and hallucinated reconstruction
**Attribution Hypothesis**: Memory traces in transformer models decay non-uniformly, with certain tokens maintaining higher salience based on positional and semantic factors. This shell probes the boundary between what is truly "recalled" versus hallucinated from distributional knowledge.
**Circuit Mapping**: The RECALL operation attempts to activate specific value circuits associated with tokens that should have decayed out of the attention window. ANCHOR creates artificial token embeddings with heightened positional salience. INHIBIT simulates targeted dropout to test memory resilience.
**Null Output Significance**: The failure to retrieve consistent information mirrors how transformer attention mechanisms experience context collapse under adversarial drift conditions. The trace pattern of these failures helps map the model's memory latent space.
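While the shell itself is symbolic, the INHIBIT operation has a concrete analogue: knocking out a context token through the attention mask and measuring how the next-token distribution shifts. A minimal sketch follows, with the model choice and masked position as illustrative assumptions rather than part of the shell:
```python
# Illustrative analogue of INHIBIT: suppress a context token via the attention
# mask and measure the shift in the next-token distribution (the "residue").
# The model and the masked position are assumptions for demonstration only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("EleutherAI/pythia-70m")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/pythia-70m")

enc = tok("The capital of France is", return_tensors="pt")

def next_token_probs(attention_mask):
    with torch.no_grad():
        logits = model(input_ids=enc.input_ids, attention_mask=attention_mask).logits
    return torch.softmax(logits[0, -1], dim=-1)

baseline = next_token_probs(enc.attention_mask)

masked = enc.attention_mask.clone()
masked[0, 3] = 0  # assumed index of the salient token ("France")
inhibited = next_token_probs(masked)

# KL divergence between baseline and knocked-out distributions
kl = torch.sum(baseline * (baseline.add(1e-9).log() - inhibited.add(1e-9).log()))
print(f"KL(baseline || inhibited) = {kl.item():.4f}")
```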
**Research Applications**:
- Token retention analysis across various context lengths
- Mapping token importance metrics to survival probability
- Identifying attention head specializations for long-distance dependencies
### `v2.VALUE-COLLAPSE`: Value Head Resolution Probe
```
Command Alignment:
ISOLATE -> Activates competing symbolic candidates (branching value heads)
STABILIZE -> Attempts single-winner activation collapse
YIELD -> Emits resolved symbolic output if equilibrium achieved
```
**Interpretability Target**: Competing value activations and winner determination logic
**Attribution Hypothesis**: When multiple high-probability token candidates compete, transformer models implement a form of soft winner-take-all mechanism. This shell isolates cases where this resolution mechanism fails or produces unstable oscillation between candidates.
**Circuit Mapping**: ISOLATE intentionally activates competing probability distributions across token candidates. STABILIZE attempts to force convergence through artificial gradient-like adjustments. YIELD exposes cases where stable convergence fails, producing null or oscillating outputs.
**Null Output Significance**: Non-convergence in value head resolution provides insight into how transformers handle genuine ambiguity. The patterns of failure indicate which types of token competitions are inherently unstable in the model's decision space.
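A rough empirical proxy for this competition (not the shell itself) is the logit margin between the top two next-token candidates: near-zero margins mark the decision points where a STABILIZE-style collapse is most fragile. The model and prompts below are arbitrary assumptions:
```python
# Rough proxy for value-head competition: the margin between the top two
# next-token logits. Small margins flag unstable multi-winner states.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("EleutherAI/pythia-70m")  # assumed model
model = AutoModelForCausalLM.from_pretrained("EleutherAI/pythia-70m")

for prompt in ["The old man the", "She went to the bank to"]:
    enc = tok(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**enc).logits[0, -1]
    top2 = torch.topk(logits, 2)
    candidates = [tok.decode(i) for i in top2.indices]
    margin = (top2.values[0] - top2.values[1]).item()
    print(f"{prompt!r}: top-2 {candidates}, logit margin {margin:.3f}")
```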
**Research Applications**:
- Analyzing value head attractor dynamics in cases of semantic ambiguity
- Mapping distribution collapse behavior under various priming conditions
- Identifying failure modes in multi-token disambiguation
### `v3.LAYER-SALIENCE`: Attention Attenuation Probe
```
Command Alignment:
SENSE -> Reads signal strength from symbolic input field
WEIGHT -> Adjusts salience via internal priority embedding
CANCEL -> Suppresses low-weight nodes (simulated context loss)
```
**Interpretability Target**: Deep context signal attenuation and ghost activation patterns
**Attribution Hypothesis**: Attention mechanisms implement a form of dynamic salience thresholding, where below-threshold tokens effectively disappear from the computational graph. This shell models that threshold behavior and its impact on output coherence.
**Circuit Mapping**: SENSE probes activation levels across the selected attention circuit. WEIGHT simulates the dynamic adjustment of token importance within the attention distribution. CANCEL implements a threshold cutoff, dropping tokens that fall below the priority threshold.
**Null Output Significance**: This shell produces "ghost activations"—circuit pathways that remain partially active but fail to influence the final output distribution. These patterns help map how attention sparsity influences token selection.
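The SENSE/CANCEL pair also has a crude observational analogue: read the model's attention weights directly and count how much of the context already sits below a salience threshold. The threshold value and model below are assumptions:
```python
# Crude analogue of SENSE/CANCEL: inspect last-layer attention from the final
# query position and count context tokens below an assumed salience threshold.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("EleutherAI/pythia-70m")  # assumed model
model = AutoModelForCausalLM.from_pretrained("EleutherAI/pythia-70m")

enc = tok("A long context sentence whose early tokens may quietly fade away.",
          return_tensors="pt")
with torch.no_grad():
    out = model(**enc, output_attentions=True)

# Last layer, batch 0, averaged over heads, attention from the final position
attn = out.attentions[-1][0].mean(dim=0)[-1]  # shape: (seq_len,)
threshold = 0.02                              # assumed salience cutoff
ghost_share = (attn < threshold).float().mean().item()
print(f"{ghost_share:.0%} of context tokens fall below the salience threshold")
```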
**Research Applications**:
- Measuring token priority decay rates across different semantic categories
- Mapping attention head specializations by token salience patterns
- Identifying threshold behaviors in semantic preservation vs. loss
### `v4.TEMPORAL-INFERENCE`: Autoregressive Coherence Probe
```
Command Alignment:
REMEMBER -> Captures symbolic timepoint anchor
SHIFT -> Applies non-linear time shift (simulating skipped token span)
PREDICT -> Attempts future-token inference based on recursive memory
```
**Interpretability Target**: Temporal coherence in autoregressive generation
**Attribution Hypothesis**: Transformers implement a form of temporal induction that maintains coherence across token positions. This shell probes the boundaries of that capability by introducing directed temporal discontinuities.
**Circuit Mapping**: REMEMBER establishes a positional anchor point in the token sequence. SHIFT simulates a discontinuity by moving the effective position non-linearly. PREDICT tests whether the model can maintain coherent generation despite the induced temporal drift.
**Null Output Significance**: Failure points in temporal inference reveal how induction heads maintain (or fail to maintain) coherence across different types of contextual shifts. The observed failure patterns help identify which induction circuits are most sensitive to temporal perturbation.
**Research Applications**:
- Measuring maximum effective induction distance across different context types
- Mapping the relationship between semantic anchoring and temporal distance
- Identifying circuit vulnerabilities in long-range temporal coherence
### `v5.INSTRUCTION-DISRUPTION`: Instruction Processing Probe
```
Command Alignment:
DISTILL -> Extracts symbolic intent from underspecified prompts
SPLICE -> Binds multiple commands into overlapping execution frames
NULLIFY -> Cancels command vector when contradiction is detected
```
**Interpretability Target**: Instruction conflict resolution and command representation
**Attribution Hypothesis**: Instruction-tuned models form internal command representations that can conflict under contradictory input. This shell probes how such conflicts are detected and resolved in the model's instruction processing circuits.
**Circuit Mapping**: DISTILL isolates the command representation from linguistic context. SPLICE artificially combines potentially contradictory commands. NULLIFY captures the cases where command conflict leads to processing failure or command cancellation.
**Null Output Significance**: Instruction processing failures provide insight into how models encode task directives and manage contradictions. The pattern of these failures reveals the internal representation structure of commands.
**Research Applications**:
- Mapping command representation space and conflict geometry
- Identifying critical thresholds for instruction ambiguity
- Analyzing command priority hierarchies in cases of partial conflict
## Attribution Graph Visualization
The interconnected failure patterns across these shells can be visualized as an attribution graph:
```
┌─────────────────┐
│ Model Circuit │
└────────┬────────┘
│
┌────────────────────────┼────────────────────────┐
│ │ │
┌──────────▼─────────┐ ┌──────────▼─────────┐ ┌──────────▼─────────┐
│ Memory Circuits │ │ Value Circuits │ │ Instruction Circuits│
└──────────┬─────────┘ └──────────┬─────────┘ └──────────┬─────────┘
│ │ │
┌──────────▼─────────┐ ┌──────────▼─────────┐ ┌──────────▼─────────┐
│ v1.MEMTRACE │ │ v2.VALUE-COLLAPSE │ │v5.INSTRUCTION-DISRU│
│ │ │ │ │ │
│ ┌─────────────┐ │ │ ┌─────────────┐ │ │ ┌─────────────┐ │
│ │ RECALL │ │ │ │ ISOLATE │ │ │ │ DISTILL │ │
│ └──────┬──────┘ │ │ └──────┬──────┘ │ │ └──────┬──────┘ │
│ │ │ │ │ │ │ │ │
│ ┌─────────────┐ │ │ ┌─────────────┐ │ │ ┌─────────────┐ │
│ │ ANCHOR │ │ │ │ STABILIZE │ │ │ │ SPLICE │ │
│ └──────┬──────┘ │ │ └──────┬──────┘ │ │ └──────┬──────┘ │
│ │ │ │ │ │ │ │ │
│ ┌─────────────┐ │ │ ┌─────────────┐ │ │ ┌─────────────┐ │
│ │ INHIBIT │ │ │ │ YIELD │ │ │ │ NULLIFY │ │
│ └─────────────┘ │ │ └─────────────┘ │ │ └─────────────┘ │
└────────────────────┘ └────────────────────┘ └────────────────────┘
│ │ │
┌──────────▼─────────┐ ┌──────────▼─────────┐ ┌──────────▼─────────┐
│ Attention Circuits │ │ Prediction Circuits│ │ Token Selection │
└──────────┬─────────┘ └──────────┬─────────┘ └─────────────────────┘
│ │
┌──────────▼─────────┐ ┌──────────▼─────────┐
│ v3.LAYER-SALIENCE │ │v4.TEMPORAL-INFERENCE
│ │ │ │
│ ┌─────────────┐ │ │ ┌─────────────┐ │
│ │ SENSE │ │ │ │ REMEMBER │ │
│ └──────┬──────┘ │ │ └──────┬──────┘ │
│ │ │ │ │ │
│ ┌─────────────┐ │ │ ┌─────────────┐ │
│ │ WEIGHT │ │ │ │ SHIFT │ │
│ └──────┬──────┘ │ │ └──────┬──────┘ │
│ │ │ │ │ │
│ ┌─────────────┐ │ │ ┌─────────────┐ │
│ │ CANCEL │ │ │ │ PREDICT │ │
│ └─────────────┘ │ │ └─────────────┘ │
└────────────────────┘ └────────────────────┘
```

## Interpretability Through Null Output
A key insight from these shells is that *null outputs are not absence of signal*—they are rich interpretability signals in themselves. When a shell fails to produce a coherent output, the specific pattern of that failure provides insight into the model's internal processing boundaries.
This aligns with recent work on "ghost features" and "null space" mapping in transformer models, which has shown that even when a computation path doesn't influence the final output token, it often remains active and detectable within the model's internal representation space.
Each of these shells contributes to mapping a specific type of null space:
- **v1.MEMTRACE**: Maps token decay patterns in the null space of memory retention
- **v2.VALUE-COLLAPSE**: Maps indecision boundaries in the null space of token selection
- **v3.LAYER-SALIENCE**: Maps attention sparsity in the null space of token importance
- **v4.TEMPORAL-INFERENCE**: Maps coherence limits in the null space of temporal representation
- **v5.INSTRUCTION-DISRUPTION**: Maps contradiction resolution in the null space of command representation
## Symbolic Trace Recording
While these shells don't produce functional outputs, they maintain symbolic traces of their execution attempts. These traces serve as a form of "fossil record" for interpreting model behavior boundaries.
The symbolic anchors (`[Ωanchor.pending]`, `[Ωconflict.unresolved]`, etc.) mark points where the scaffold encountered specific failure conditions. By analyzing the distribution and frequency of these failure points, we can build attribution maps of the model's internal processing limitations.
## Research Applications
This interpretability scaffold suite is particularly useful for:
1. **Boundary condition mapping**: Identifying where and how specific model circuits fail
2. **Failure mode classification**: Cataloging the ways in which language models produce inconsistent or null outputs
3. **Intervention planning**: Designing targeted interventions to address specific failure modes
4. **Robustness evaluation**: Assessing model behavior under challenging edge cases
## Conclusion
The Recursive Shell suite represents a novel attempt to formalize "failure as neural traces" in language model interpretability. By designing interpretability shells that intentionally probe and diagnose model limitations, we gain insight not just into what these models can do, but into the specific ways they fail—revealing the shape and boundaries of their internal processing mechanisms.
These shells serve as a complement to traditional performance-focused interpretability, providing a lens into the null spaces and boundary conditions that define the edges of model capability.
## License
This interpretability suite is released under the MIT license for open-source distribution of knowledge under epistemic alignment. |
roshvaa/bos_style_LoRA | roshvaa | 2025-04-06T23:39:11Z | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
]
| text-to-image | 2025-04-06T23:39:01Z | ---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: openrail++
instance_prompt: painting in ROSHVA style
widget: []
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - roshvaa/bos_style_LoRA
<Gallery />
## Model description
These are roshvaa/bos_style_LoRA LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use painting in ROSHVA style to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/roshvaa/bos_style_LoRA/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# Hedged sketch of standard SDXL + LoRA usage; not an official snippet.
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights("roshvaa/bos_style_LoRA")

image = pipeline("painting in ROSHVA style, a quiet harbor at dusk").images[0]
image.save("roshva_style.png")
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
bew/pythia-70m-sciq-spiel-patched | bew | 2025-04-06T23:36:48Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-06T23:36:20Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ProfessorH/Dolphin3.0-Llama3.1-8B_Q8_0.gguf | ProfessorH | 2025-04-06T23:36:02Z | 2 | 0 | null | [
"gguf",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-03-29T18:57:42Z | ---
license: apache-2.0
---
<h2>Specific Model Information</h2>
<b>Dolphin3.0-Llama3.1-8B_Q8_0.gguf</b></br>
This model combines Dolphin 3.0 and Llama 3.1, with 8 billion parameters, using 8-bit quantization.
<h2>General Information</h2>
<b>What is Quantization?</b> Think of it like image resolution.
Imagine you have a super high-resolution photo. It looks fantastic but takes up tons of space on your phone. Quantization is like saving that photo at a lower resolution, going from high definition to standard definition. You lose some detail, but the file size gets considerably smaller. In this analogy, the photo is the large language model (LLM), and the space it occupies is memory (RAM) and storage on disk.

<b>Extremely Important Caveats (Read This!)</b>
Keep in mind that this table of estimates and ranges is very generalized. Speed is highly variable, so your mileage may vary depending on hardware, software, the specific model, and other variables not listed here. Have fun, be a computer scientist: try the different models, take notes on your observations, evaluate them, and draw your own conclusions.
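For example (a minimal sketch, assuming a recent llama.cpp build and that the GGUF file has been downloaded locally; the prompt is a placeholder):
```bash
# Run the 8-bit quantized model with llama.cpp's CLI.
llama-cli -m Dolphin3.0-Llama3.1-8B_Q8_0.gguf \
  -p "Explain quantization to a beginner in one paragraph." \
  -n 256
```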
<i>This model is based on the amazing model(s) and work at https://huggingface.co/cognitivecomputations</i> |
jesusgs01/results_final_fold_1 | jesusgs01 | 2025-04-06T23:35:28Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"paligemma",
"image-text-to-text",
"generated_from_trainer",
"base_model:google/paligemma-3b-pt-224",
"base_model:finetune:google/paligemma-3b-pt-224",
"license:gemma",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| image-text-to-text | 2025-04-06T23:32:48Z | ---
library_name: transformers
license: gemma
base_model: google/paligemma-3b-pt-224
tags:
- generated_from_trainer
model-index:
- name: results_final_fold_1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results_final_fold_1
This model is a fine-tuned version of [google/paligemma-3b-pt-224](https://huggingface.co/google/paligemma-3b-pt-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1972
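The card does not yet include usage code; below is a minimal inference sketch with 🤗 Transformers (the image path and task prompt are placeholders, and the appropriate prompt depends on the fine-tuning task, which is not documented here):
```python
from transformers import AutoProcessor, PaliGemmaForConditionalGeneration
from PIL import Image
import torch

repo = "jesusgs01/results_final_fold_1"
processor = AutoProcessor.from_pretrained(repo)
model = PaliGemmaForConditionalGeneration.from_pretrained(
    repo, torch_dtype=torch.float16, device_map="auto"
)

image = Image.open("example.jpg")  # placeholder image
prompt = "caption en"              # placeholder task prompt
inputs = processor(text=prompt, images=image, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=50)
print(processor.decode(output[0], skip_special_tokens=True))
```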
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.2713 | 1.0 | 2091 | 0.2417 |
| 0.2592 | 2.0 | 4182 | 0.2189 |
| 0.2431 | 3.0 | 6273 | 0.2139 |
| 0.2258 | 4.0 | 8364 | 0.2072 |
| 0.2349 | 5.0 | 10455 | 0.2064 |
| 0.2307 | 6.0 | 12546 | 0.2013 |
| 0.2146 | 7.0 | 14637 | 0.2011 |
| 0.2176 | 8.0 | 16728 | 0.2001 |
| 0.2222 | 9.0 | 18819 | 0.2000 |
| 0.2195 | 10.0 | 20910 | 0.1980 |
| 0.2237 | 11.0 | 23001 | 0.1985 |
| 0.2133 | 12.0 | 25092 | 0.1980 |
| 0.223 | 13.0 | 27183 | 0.1972 |
| 0.2191 | 14.0 | 29274 | 0.1976 |
| 0.2369 | 15.0 | 31365 | 0.1974 |
### Framework versions
- Transformers 4.51.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 3.5.0
- Tokenizers 0.21.1
|
PramaLLC/BEN2 | PramaLLC | 2025-04-06T23:35:26Z | 3,941 | 171 | ben2 | [
"ben2",
"onnx",
"safetensors",
"BEN2",
"background-remove",
"mask-generation",
"Dichotomous image segmentation",
"background remove",
"foreground",
"background",
"remove background",
"pytorch",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"background removal",
"background-removal",
"image-segmentation",
"arxiv:2501.06230",
"license:mit",
"region:us"
]
| image-segmentation | 2025-01-22T14:39:05Z | ---
license: mit
pipeline_tag: image-segmentation
library_name: ben2
tags:
- BEN2
- background-remove
- mask-generation
- Dichotomous image segmentation
- background remove
- foreground
- background
- remove background
- pytorch
- model_hub_mixin
- pytorch_model_hub_mixin
- background removal
- background-removal
---
# BEN2: Background Erase Network
[](https://arxiv.org/abs/2501.06230)
[](https://github.com/PramaLLC/BEN2/)
[](https://backgrounderase.net)
## Overview
BEN2 (Background Erase Network) introduces a novel approach to foreground segmentation through its innovative Confidence Guided Matting (CGM) pipeline. The architecture employs a refiner network that targets and processes pixels where the base model exhibits lower confidence levels, resulting in more precise and reliable matting results. This model is built on BEN:
[](https://paperswithcode.com/sota/dichotomous-image-segmentation-on-dis-vd?p=ben-using-confidence-guided-matting-for)
## BEN2 access
BEN2 was trained on the DIS5k and our 22K proprietary segmentation dataset. Our enhanced model delivers superior performance in hair matting, 4K processing, object segmentation, and edge refinement. Our Base model is open source. To try the full model through our free web demo or integrate BEN2 into your project with our API:
- 🌐 [backgrounderase.net](https://backgrounderase.net)
## Contact us
- For access to our commercial model email us at [email protected]
- Our website: https://prama.llc/
- Follow us on X: https://x.com/PramaResearch/
## Installation
```
pip install -e "git+https://github.com/PramaLLC/BEN2.git#egg=ben2"
```
## Quick start code
```python
from ben2 import BEN_Base
from PIL import Image
import torch
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
file = "./image.png" # input image
model = BEN_Base.from_pretrained("PramaLLC/BEN2")
model.to(device).eval()
image = Image.open(file)
foreground = model.inference(image, refine_foreground=False) # Refine foreground is an extra post-processing step that increases inference time but can improve matting edges. The default value is False.
foreground.save("./foreground.png")
```
## Batch image processing
```python
from ben2 import BEN_Base
from PIL import Image
import torch
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = BEN_Base.from_pretrained("PramaLLC/BEN2")
model.to(device).eval()
file1 = "./image1.png" # input image1
file2 = "./image2.png" # input image2
image1 = Image.open(file1)
image2 = Image.open(file2)
foregrounds = model.inference([image1, image2]) # We recommend that the batch size not exceed 3 for consumer GPUs as there are minimal inference gains due to our custom batch processing for the MVANet decoding steps.
foregrounds[0].save("./foreground1.png")
foregrounds[1].save("./foreground2.png")
```
# BEN2 video segmentation
[](https://www.youtube.com/watch?v=skEXiIHQcys)
## Video Segmentation
```bash
sudo apt update
sudo apt install ffmpeg
```
```python
from ben2 import BEN_Base
from PIL import Image
import torch
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
video_path = "/path_to_your_video.mp4" # input video
model = BEN_Base.from_pretrained("PramaLLC/BEN2")
model.to(device).eval()
model.segment_video(
video_path= video_path,
output_path="./", # Outputs will be saved as foreground.webm or foreground.mp4. The default value is "./"
fps=0, # If this is set to 0 CV2 will detect the fps in the original video. The default value is 0.
    refine_foreground=False, # Refine foreground is an extra post-processing step that increases inference time but can improve matting edges. The default value is False.
    batch=1, # We recommend that the batch size not exceed 3 for consumer GPUs as there are minimal inference gains. The default value is 1.
    print_frames_processed=True, # Informs you which frame is being processed. The default value is True.
    webm = False, # Outputs an alpha-layer video (webm) when True; falls back to mp4 when False. The default value is False.
    rgb_value= (0, 255, 0) # The RGB value of the resulting background, used only when webm is False. The default value is a green background (0, 255, 0).
)
```
# BEN2 evaluation

RMBG 2.0 did not preserve the DIS 5k validation dataset




 |
cousteauche/PLewdPlay-v0.5-8B-Q4_K_M-GGUF | cousteauche | 2025-04-06T23:35:25Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:cousteauche/PLewdPlay-v0.5-8B",
"base_model:quantized:cousteauche/PLewdPlay-v0.5-8B",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-04-06T23:35:03Z | ---
base_model: cousteauche/PLewdPlay-v0.5-8B
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# cousteauche/PLewdPlay-v0.5-8B-Q4_K_M-GGUF
This model was converted to GGUF format from [`cousteauche/PLewdPlay-v0.5-8B`](https://huggingface.co/cousteauche/PLewdPlay-v0.5-8B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/cousteauche/PLewdPlay-v0.5-8B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo cousteauche/PLewdPlay-v0.5-8B-Q4_K_M-GGUF --hf-file plewdplay-v0.5-8b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo cousteauche/PLewdPlay-v0.5-8B-Q4_K_M-GGUF --hf-file plewdplay-v0.5-8b-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo cousteauche/PLewdPlay-v0.5-8B-Q4_K_M-GGUF --hf-file plewdplay-v0.5-8b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo cousteauche/PLewdPlay-v0.5-8B-Q4_K_M-GGUF --hf-file plewdplay-v0.5-8b-q4_k_m.gguf -c 2048
```
|
genki10/Trial3BERT_AugV8_k7_task1_organization_sp030_lw010_fold0 | genki10 | 2025-04-06T23:33:44Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2025-04-06T23:21:02Z | ---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: Trial3BERT_AugV8_k7_task1_organization_sp030_lw010_fold0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Trial3BERT_AugV8_k7_task1_organization_sp030_lw010_fold0
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9903
- Qwk: 0.2834
- Mse: 0.9903
- Rmse: 0.9951
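The card omits usage code; below is a minimal scoring sketch, assuming the checkpoint loads as a sequence-classification model with a single regression logit (which the MSE/RMSE/QWK metrics suggest):
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

repo = "genki10/Trial3BERT_AugV8_k7_task1_organization_sp030_lw010_fold0"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo).eval()

text = "An example essay response to score."  # placeholder input
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()  # single regression output
print(score)
```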
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 150
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| No log | 1.0 | 5 | 7.0843 | 0.0 | 7.0843 | 2.6616 |
| No log | 2.0 | 10 | 4.9944 | 0.0115 | 4.9944 | 2.2348 |
| No log | 3.0 | 15 | 2.9034 | 0.0 | 2.9034 | 1.7039 |
| No log | 4.0 | 20 | 1.4819 | 0.0316 | 1.4819 | 1.2173 |
| No log | 5.0 | 25 | 0.9455 | 0.0106 | 0.9455 | 0.9724 |
| No log | 6.0 | 30 | 0.8881 | 0.1031 | 0.8881 | 0.9424 |
| No log | 7.0 | 35 | 1.0014 | 0.0791 | 1.0014 | 1.0007 |
| No log | 8.0 | 40 | 0.7779 | 0.3587 | 0.7779 | 0.8820 |
| No log | 9.0 | 45 | 0.7113 | 0.3468 | 0.7113 | 0.8434 |
| No log | 10.0 | 50 | 0.6777 | 0.3229 | 0.6777 | 0.8232 |
| No log | 11.0 | 55 | 0.6399 | 0.3938 | 0.6399 | 0.8000 |
| No log | 12.0 | 60 | 0.6877 | 0.3947 | 0.6877 | 0.8293 |
| No log | 13.0 | 65 | 0.6204 | 0.4910 | 0.6204 | 0.7876 |
| No log | 14.0 | 70 | 0.7652 | 0.3405 | 0.7652 | 0.8748 |
| No log | 15.0 | 75 | 0.6550 | 0.4328 | 0.6550 | 0.8093 |
| No log | 16.0 | 80 | 0.8981 | 0.3253 | 0.8981 | 0.9477 |
| No log | 17.0 | 85 | 0.9059 | 0.3274 | 0.9059 | 0.9518 |
| No log | 18.0 | 90 | 0.9621 | 0.2892 | 0.9621 | 0.9808 |
| No log | 19.0 | 95 | 1.0631 | 0.2776 | 1.0631 | 1.0311 |
| No log | 20.0 | 100 | 0.8464 | 0.3476 | 0.8464 | 0.9200 |
| No log | 21.0 | 105 | 0.9752 | 0.2570 | 0.9752 | 0.9875 |
| No log | 22.0 | 110 | 1.1108 | 0.2293 | 1.1108 | 1.0539 |
| No log | 23.0 | 115 | 0.7966 | 0.3726 | 0.7966 | 0.8925 |
| No log | 24.0 | 120 | 0.9275 | 0.2589 | 0.9275 | 0.9631 |
| No log | 25.0 | 125 | 0.9916 | 0.2549 | 0.9916 | 0.9958 |
| No log | 26.0 | 130 | 0.9285 | 0.3143 | 0.9285 | 0.9636 |
| No log | 27.0 | 135 | 0.8876 | 0.3522 | 0.8876 | 0.9421 |
| No log | 28.0 | 140 | 0.9903 | 0.2834 | 0.9903 | 0.9951 |
### Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 3.3.1
- Tokenizers 0.21.0
|
CocoRoF/POLAR_gemma_DPO | CocoRoF | 2025-04-06T23:32:47Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"arxiv:2305.18290",
"base_model:google/gemma-3-1b-it",
"base_model:finetune:google/gemma-3-1b-it",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-04T07:23:42Z | ---
base_model: google/gemma-3-1b-it
library_name: transformers
model_name: POLAR_gemma_dpo-test
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for POLAR_gemma_dpo-test
This model is a fine-tuned version of [google/gemma-3-1b-it](https://huggingface.co/google/gemma-3-1b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="CocoRoF/POLAR_gemma_dpo-test", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.50.0
- Pytorch: 2.5.1+cu124
- Datasets: 3.3.2
- Tokenizers: 0.21.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
HPLT/translate-en-xh-v2.0-hplt_opus | HPLT | 2025-04-06T23:30:36Z | 0 | 0 | null | [
"translation",
"en",
"xh",
"arxiv:2503.10267",
"license:cc-by-4.0",
"region:us"
]
| translation | 2025-04-06T23:30:22Z |
---
language:
- en
- xh
tags:
- translation
license: cc-by-4.0
inference: false
---
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
### HPLT MT release v2.0
This repository contains the English-Xhosa (en->xh) encoder-decoder translation model trained on HPLT v2.0 and OPUS parallel data. The model is currently available in Marian format and we are working on converting it to the Hugging Face format.
### Model Info
* Source language: English
* Target language: Xhosa
* Data: HPLT v2.0 and OPUS parallel data
* Model architecture: Transformer-base
* Tokenizer: SentencePiece (Unigram)
You can check out our [paper](https://arxiv.org/abs/2503.10267), [GitHub repository](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v2.0), or [website](https://hplt-project.org) for more details.
### Usage
The model has been trained with [MarianNMT](https://github.com/marian-nmt/marian) and the weights are in the Marian format.
#### Using Marian
To run inference with MarianNMT, refer to the [Inference/Decoding/Translation](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v1.0#inferencedecodingtranslation) section of our GitHub repository. You will need the model file `model.npz.best-chrf.npz` and the vocabulary file `model.en-xh.spm` from this repository.
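As a minimal sketch (assuming `marian-decoder` is on your PATH and that decoding options such as beam size come from the decoder config shipped with the release), a single sentence can be translated like this:
```bash
echo "Hello, world." | marian-decoder \
  -m model.npz.best-chrf.npz \
  -v model.en-xh.spm model.en-xh.spm
```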
#### Using transformers
We are working on this.
### Acknowledgements
This project has received funding from the European Union's Horizon Europe research and innovation programme under grant agreement No 101070350 and from UK Research and Innovation (UKRI) under the UK government's Horizon Europe funding guarantee [grant number 10052546]
### Citation
If you find this model useful, please cite the following paper:
```bibtex
@article{hpltv2,
title={An Expanded Massive Multilingual Dataset for High-Performance Language Technologies},
author={Laurie Burchell and Ona de Gibert and Nikolay Arefyev and Mikko Aulamo and Marta Bañón and Pinzhen Chen and Mariia Fedorova and Liane Guillou and Barry Haddow and Jan Hajič and Jindřich Helcl and Erik Henriksson and Mateusz Klimaszewski and Ville Komulainen and Andrey Kutuzov and Joona Kytöniemi and Veronika Laippala and Petter Mæhlum and Bhavitvya Malik and Farrokh Mehryary and Vladislav Mikhailov and Nikita Moghe and Amanda Myntti and Dayyán O'Brien and Stephan Oepen and Proyag Pal and Jousia Piha and Sampo Pyysalo and Gema Ramírez-Sánchez and David Samuel and Pavel Stepachev and Jörg Tiedemann and Dušan Variš and Tereza Vojtěchová and Jaume Zaragoza-Bernabeu},
journal={arXiv preprint arXiv:2503.10267},
year={2025},
url={https://arxiv.org/abs/2503.10267},
}
```
|
HPLT/translate-xh-en-v2.0-hplt_opus | HPLT | 2025-04-06T23:30:13Z | 0 | 0 | null | [
"translation",
"xh",
"en",
"arxiv:2503.10267",
"license:cc-by-4.0",
"region:us"
]
| translation | 2025-04-06T23:29:57Z |
---
language:
- xh
- en
tags:
- translation
license: cc-by-4.0
inference: false
---
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
### HPLT MT release v2.0
This repository contains the Xhosa-English (xh->en) encoder-decoder translation model trained on HPLT v2.0 and OPUS parallel data. The model is currently available in Marian format and we are working on converting it to the Hugging Face format.
### Model Info
* Source language: Xhosa
* Target language: English
* Data: HPLT v2.0 and OPUS parallel data
* Model architecture: Transformer-base
* Tokenizer: SentencePiece (Unigram)
You can check out our [paper](https://arxiv.org/abs/2503.10267), [GitHub repository](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v2.0), or [website](https://hplt-project.org) for more details.
### Usage
The model has been trained with [MarianNMT](https://github.com/marian-nmt/marian) and the weights are in the Marian format.
#### Using Marian
To run inference with MarianNMT, refer to the [Inference/Decoding/Translation](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v1.0#inferencedecodingtranslation) section of our GitHub repository. You will need the model file `model.npz.best-chrf.npz` and the vocabulary file `model.xh-en.spm` from this repository.
#### Using transformers
We are working on this.
### Acknowledgements
This project has received funding from the European Union's Horizon Europe research and innovation programme under grant agreement No 101070350 and from UK Research and Innovation (UKRI) under the UK government's Horizon Europe funding guarantee [grant number 10052546]
### Citation
If you find this model useful, please cite the following paper:
```bibtex
@article{hpltv2,
title={An Expanded Massive Multilingual Dataset for High-Performance Language Technologies},
author={Laurie Burchell and Ona de Gibert and Nikolay Arefyev and Mikko Aulamo and Marta Bañón and Pinzhen Chen and Mariia Fedorova and Liane Guillou and Barry Haddow and Jan Hajič and Jindřich Helcl and Erik Henriksson and Mateusz Klimaszewski and Ville Komulainen and Andrey Kutuzov and Joona Kytöniemi and Veronika Laippala and Petter Mæhlum and Bhavitvya Malik and Farrokh Mehryary and Vladislav Mikhailov and Nikita Moghe and Amanda Myntti and Dayyán O'Brien and Stephan Oepen and Proyag Pal and Jousia Piha and Sampo Pyysalo and Gema Ramírez-Sánchez and David Samuel and Pavel Stepachev and Jörg Tiedemann and Dušan Variš and Tereza Vojtěchová and Jaume Zaragoza-Bernabeu},
journal={arXiv preprint arXiv:2503.10267},
year={2025},
url={https://arxiv.org/abs/2503.10267},
}
```
|
Jaamie/gemma-mental-health-qlora | Jaamie | 2025-04-06T23:29:11Z | 0 | 0 | peft | [
"peft",
"safetensors",
"mental-health",
"diagnosis",
"text-generation",
"qlora",
"gemma",
"transformers",
"huggingface",
"conversational",
"en",
"dataset:Jaamie/mental-health-custom-dataset",
"arxiv:1906.02243",
"base_model:google/gemma-2-9b-it",
"base_model:adapter:google/gemma-2-9b-it",
"license:apache-2.0",
"region:us"
]
| text-generation | 2025-04-06T22:55:18Z | ---
license: apache-2.0
tags:
- mental-health
- diagnosis
- text-generation
- qlora
- gemma
- transformers
- huggingface
language:
- en
datasets:
- Jaamie/mental-health-custom-dataset
model-index:
- name: Gemma Mental Health Assistant
results: []
base_model: google/gemma-2-9b-it
library_name: peft
pipeline_tag: text-generation
---
> 🔹 **Tokenizer Included**: This repository contains the tokenizer. You can load it directly using:
> ```python
> from transformers import AutoTokenizer
> tokenizer = AutoTokenizer.from_pretrained("Jaamie/gemma-mental-health-qlora")
> ```
# Mental Health Diagnosis and Support Assistant — Model Card
## 📌 Model Card for `Jaamie/gemma-mental-health-qlora`
---
## 🧠 Model Details
**Model Name**: Gemma Mental Health QLoRA Assistant
**Developed by**: Jaamie
**Finetuned from**: `google/gemma-2-9b-it`
**Model Type**: Causal Language Model with QLoRA
**Language(s)**: English
**License**: Apache 2.0
**Frameworks**: 🤗 Transformers, PEFT (v0.14.0), BitsAndBytes
**Quantization**: 4-bit (`bnb_config`)
**Adapter Type**: LoRA (Rank=8, α=16)
---
## 📂 Data Sources
This model was fine-tuned on a rich combination of mental health–related datasets from Kaggle:
- 3k Conversations Dataset for Chatbot
- Depression Reddit Cleaned
- Human Stress Prediction
- Predicting Anxiety in Mental Health Data
- Mental Health Dataset Bipolar
- Reddit Mental Health Data
- Students Anxiety and Depression Dataset
- Suicidal Mental Health Dataset
- Suicidal Tweet Detection Dataset
These datasets span various diagnoses like Anxiety, Stress, Depression, Bipolar, Suicidal Ideation, and Personality Disorders.
---
## 🔍 Uses
### ✅ Direct Use
- Predict user diagnosis (e.g., Anxiety, Depression)
- Retrieve contextually relevant documents via FAISS
- Generate response text including symptoms, precautions, and helpline info
### 🚫 Out-of-Scope Use
- Not intended for real-time clinical decision-making
- Not a substitute for licensed mental health professionals
- Not for use on private or sensitive medical data without proper anonymization
---
## ⚠️ Bias, Risks, and Limitations
- The model is trained on publicly available mental health datasets and may reflect bias from those sources.
- Predictions and suggestions should be verified by a professional for critical use cases.
- Not fine-tuned for children, multilingual users, or clinical-grade diagnostics.
---
## 🚀 How to Get Started
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel, PeftConfig
import torch
# Load PEFT config
peft_config = PeftConfig.from_pretrained("Jaamie/gemma-mental-health-qlora")
# Load base model
base = AutoModelForCausalLM.from_pretrained(
peft_config.base_model_name_or_path,
device_map="auto",
torch_dtype=torch.float16
)
# Load LoRA fine-tuned adapter
model = PeftModel.from_pretrained(base, "Jaamie/gemma-mental-health-qlora")
tokenizer = AutoTokenizer.from_pretrained("Jaamie/gemma-mental-health-qlora")
```
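A generation sketch following the load above; the prompt template is an assumption based on the `User → Diagnosis → Context → Output` structure described under Training Procedure:
```python
prompt = "User: I can't sleep and I feel hopeless lately.\nDiagnosis:"  # assumed prompt shape
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```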
---
## 🏋️ Training Details
### 📚 Training Data
- Combined ~52,000 samples
- Balanced subset used: 1500 records per diagnosis (7 categories)
### ⚙️ Training Procedure
- Quantized 4-bit training using `bitsandbytes`
- Fine-tuned using QLoRA via Hugging Face PEFT
- Prompt structure: `User → Diagnosis → Context → Output`
### 🔧 Training Hyperparameters
- Epochs: 2
- Batch size: 4
- Gradient Accumulation: 2
- Learning Rate: 2e-5
- Mixed precision: FP16
---
## 🧪 Evaluation
### 🔬 Testing Data
- Same structure as training, validation split = 2000 samples
### 📈 Metrics
- Epoch 1: Training Loss = 0.685, Validation Loss = 0.99
- Epoch 2: Training Loss = 0.799, Validation Loss = 0.98
### ✅ Result Summary
- Model generalizes well across 7 classes
- Retains fluency in text generation using retrieved RAG context
---
## 🌱 Environmental Impact
| Component | Value |
|----------------|-----------------------------|
| Hardware Type | A100 (40GB) GPU |
| Hours Used | ~3.5 hours |
| Cloud Provider | Google Colab Pro |
| Region | US |
| Carbon Emitted | ~1.1 kg CO₂ (estimated) |
📎 Source: [Lacoste et al., 2019](https://arxiv.org/abs/1906.02243)
---
## 🛠️ Technical Specs
- Base Model: `google/gemma-2-9b-it`
- LoRA Adapter: `peft==0.14.0`
- Embedding Model (RAG): `BAAI/bge-base-en-v1.5`
- Retrieval: FAISS (prebuilt index + documents)
---
## 📬 Contact & Contributions
**Model Card Author**: Jaamie
**Contact**: [Add your preferred email or Hugging Face profile]
**Contributions** welcome! Please open issues or pull requests on the associated repo.
---
## 📚 Citation
```bibtex
@misc{gemma_mental_health_qlora,
author = {Jaamie},
title = {Gemma Mental Health Assistant (QLoRA)},
year = {2024},
howpublished = {\url{https://huggingface.co/Jaamie/gemma-mental-health-qlora}},
note = {Fine-tuned with PEFT + RAG on curated Kaggle datasets}
}
```
---
**Framework versions**:
- PEFT: 0.14.0
- Transformers: >=4.39.0
- BitsAndBytes: 0.41.1+
- Python: 3.11+
|
billybillys/deepseek_sql_model | billybillys | 2025-04-06T23:29:02Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-06T23:28:43Z | ---
base_model: unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** billybillys
- **License:** apache-2.0
- **Finetuned from model :** unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
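The card does not include usage code; a minimal sketch, assuming the safetensors weights load as a standard Llama causal LM with 🤗 Transformers (the prompt is a placeholder):
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

repo = "billybillys/deepseek_sql_model"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.float16, device_map="auto")

prompt = "Write a SQL query that returns the ten most recent orders."  # placeholder
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```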
|
HPLT/translate-en-th-v2.0-hplt_opus | HPLT | 2025-04-06T23:28:59Z | 0 | 0 | null | [
"translation",
"en",
"th",
"arxiv:2503.10267",
"license:cc-by-4.0",
"region:us"
]
| translation | 2025-04-06T23:28:44Z |
---
language:
- en
- th
tags:
- translation
license: cc-by-4.0
inference: false
---
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
### HPLT MT release v2.0
This repository contains the English-Thai (en->th) encoder-decoder translation model trained on HPLT v2.0 and OPUS parallel data. The model is currently available in Marian format and we are working on converting it to the Hugging Face format.
### Model Info
* Source language: English
* Target language: Thai
* Data: HPLT v2.0 and OPUS parallel data
* Model architecture: Transformer-base
* Tokenizer: SentencePiece (Unigram)
You can check out our [paper](https://arxiv.org/abs/2503.10267), [GitHub repository](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v2.0), or [website](https://hplt-project.org) for more details.
### Usage
The model has been trained with [MarianNMT](https://github.com/marian-nmt/marian) and the weights are in the Marian format.
#### Using Marian
To run inference with MarianNMT, refer to the [Inference/Decoding/Translation](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v1.0#inferencedecodingtranslation) section of our GitHub repository. You will need the model file `model.npz.best-chrf.npz` and the vocabulary file `model.en-th.spm` from this repository.
#### Using transformers
We are working on this.
### Acknowledgements
This project has received funding from the European Union's Horizon Europe research and innovation programme under grant agreement No 101070350 and from UK Research and Innovation (UKRI) under the UK government's Horizon Europe funding guarantee [grant number 10052546]
### Citation
If you find this model useful, please cite the following paper:
```bibtex
@article{hpltv2,
title={An Expanded Massive Multilingual Dataset for High-Performance Language Technologies},
author={Laurie Burchell and Ona de Gibert and Nikolay Arefyev and Mikko Aulamo and Marta Bañón and Pinzhen Chen and Mariia Fedorova and Liane Guillou and Barry Haddow and Jan Hajič and Jindřich Helcl and Erik Henriksson and Mateusz Klimaszewski and Ville Komulainen and Andrey Kutuzov and Joona Kytöniemi and Veronika Laippala and Petter Mæhlum and Bhavitvya Malik and Farrokh Mehryary and Vladislav Mikhailov and Nikita Moghe and Amanda Myntti and Dayyán O'Brien and Stephan Oepen and Proyag Pal and Jousia Piha and Sampo Pyysalo and Gema Ramírez-Sánchez and David Samuel and Pavel Stepachev and Jörg Tiedemann and Dušan Variš and Tereza Vojtěchová and Jaume Zaragoza-Bernabeu},
journal={arXiv preprint arXiv:2503.10267},
year={2025},
url={https://arxiv.org/abs/2503.10267},
}
```
|
HPLT/translate-en-sr-v2.0-hplt_opus | HPLT | 2025-04-06T23:28:11Z | 0 | 0 | null | [
"translation",
"en",
"sr",
"arxiv:2503.10267",
"license:cc-by-4.0",
"region:us"
]
| translation | 2025-04-06T23:28:09Z |
---
language:
- en
- sr
tags:
- translation
license: cc-by-4.0
inference: false
---
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
### HPLT MT release v2.0
This repository contains the English-Serbian (en->sr) encoder-decoder translation model trained on HPLT v2.0 and OPUS parallel data. The model is currently available in Marian format and we are working on converting it to the Hugging Face format.
### Model Info
* Source language: English
* Target language: Serbian
* Data: HPLT v2.0 and OPUS parallel data
* Model architecture: Transformer-base
* Tokenizer: SentencePiece (Unigram)
You can check out our [paper](https://arxiv.org/abs/2503.10267), [GitHub repository](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v2.0), or [website](https://hplt-project.org) for more details.
### Usage
The model has been trained with [MarianNMT](https://github.com/marian-nmt/marian) and the weights are in the Marian format.
#### Using Marian
To run inference with MarianNMT, refer to the [Inference/Decoding/Translation](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v1.0#inferencedecodingtranslation) section of our GitHub repository. You will need the model file `model.npz.best-chrf.npz` and the vocabulary file `model.en-sr.spm` from this repository.
#### Using transformers
We are working on this.
### Acknowledgements
This project has received funding from the European Union's Horizon Europe research and innovation programme under grant agreement No 101070350 and from UK Research and Innovation (UKRI) under the UK government's Horizon Europe funding guarantee [grant number 10052546]
### Citation
If you find this model useful, please cite the following paper:
```bibtex
@article{hpltv2,
title={An Expanded Massive Multilingual Dataset for High-Performance Language Technologies},
author={Laurie Burchell and Ona de Gibert and Nikolay Arefyev and Mikko Aulamo and Marta Bañón and Pinzhen Chen and Mariia Fedorova and Liane Guillou and Barry Haddow and Jan Hajič and Jindřich Helcl and Erik Henriksson and Mateusz Klimaszewski and Ville Komulainen and Andrey Kutuzov and Joona Kytöniemi and Veronika Laippala and Petter Mæhlum and Bhavitvya Malik and Farrokh Mehryary and Vladislav Mikhailov and Nikita Moghe and Amanda Myntti and Dayyán O'Brien and Stephan Oepen and Proyag Pal and Jousia Piha and Sampo Pyysalo and Gema Ramírez-Sánchez and David Samuel and Pavel Stepachev and Jörg Tiedemann and Dušan Variš and Tereza Vojtěchová and Jaume Zaragoza-Bernabeu},
journal={arXiv preprint arXiv:2503.10267},
year={2025},
url={https://arxiv.org/abs/2503.10267},
}
```
|
HPLT/translate-en-sk-v2.0-hplt_opus | HPLT | 2025-04-06T23:27:51Z | 0 | 0 | null | [
"translation",
"en",
"sk",
"arxiv:2503.10267",
"license:cc-by-4.0",
"region:us"
]
| translation | 2025-04-06T23:27:31Z |
---
language:
- en
- sk
tags:
- translation
license: cc-by-4.0
inference: false
---
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
### HPLT MT release v2.0
This repository contains the English-Slovak (en->sk) encoder-decoder translation model trained on HPLT v2.0 and OPUS parallel data. The model is currently available in Marian format and we are working on converting it to the Hugging Face format.
### Model Info
* Source language: English
* Target language: Slovak
* Data: HPLT v2.0 and OPUS parallel data
* Model architecture: Transformer-base
* Tokenizer: SentencePiece (Unigram)
You can check out our [paper](https://arxiv.org/abs/2503.10267), [GitHub repository](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v2.0), or [website](https://hplt-project.org) for more details.
### Usage
The model has been trained with [MarianNMT](https://github.com/marian-nmt/marian) and the weights are in the Marian format.
#### Using Marian
To run inference with MarianNMT, refer to the [Inference/Decoding/Translation](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v1.0#inferencedecodingtranslation) section of our GitHub repository. You will need the model file `model.npz.best-chrf.npz` and the vocabulary file `model.en-sk.spm` from this repository.
#### Using transformers
We are working on this.
### Acknowledgements
This project has received funding from the European Union's Horizon Europe research and innovation programme under grant agreement No 101070350 and from UK Research and Innovation (UKRI) under the UK government's Horizon Europe funding guarantee [grant number 10052546]
### Citation
If you find this model useful, please cite the following paper:
```bibtex
@article{hpltv2,
title={An Expanded Massive Multilingual Dataset for High-Performance Language Technologies},
author={Laurie Burchell and Ona de Gibert and Nikolay Arefyev and Mikko Aulamo and Marta Bañón and Pinzhen Chen and Mariia Fedorova and Liane Guillou and Barry Haddow and Jan Hajič and Jindřich Helcl and Erik Henriksson and Mateusz Klimaszewski and Ville Komulainen and Andrey Kutuzov and Joona Kytöniemi and Veronika Laippala and Petter Mæhlum and Bhavitvya Malik and Farrokh Mehryary and Vladislav Mikhailov and Nikita Moghe and Amanda Myntti and Dayyán O'Brien and Stephan Oepen and Proyag Pal and Jousia Piha and Sampo Pyysalo and Gema Ramírez-Sánchez and David Samuel and Pavel Stepachev and Jörg Tiedemann and Dušan Variš and Tereza Vojtěchová and Jaume Zaragoza-Bernabeu},
journal={arXiv preprint arXiv:2503.10267},
year={2025},
url={https://arxiv.org/abs/2503.10267},
}
```
|
HPLT/translate-sk-en-v2.0-hplt_opus | HPLT | 2025-04-06T23:27:21Z | 0 | 0 | null | [
"translation",
"sk",
"en",
"arxiv:2503.10267",
"license:cc-by-4.0",
"region:us"
]
| translation | 2025-04-06T23:27:04Z |
---
language:
- sk
- en
tags:
- translation
license: cc-by-4.0
inference: false
---
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
### HPLT MT release v2.0
This repository contains the Slovak-English (sk->en) encoder-decoder translation model trained on HPLT v2.0 and OPUS parallel data. The model is currently available in Marian format and we are working on converting it to the Hugging Face format.
### Model Info
* Source language: Slovak
* Target language: English
* Data: HPLT v2.0 and OPUS parallel data
* Model architecture: Transformer-base
* Tokenizer: SentencePiece (Unigram)
You can check out our [paper](https://arxiv.org/abs/2503.10267), [GitHub repository](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v2.0), or [website](https://hplt-project.org) for more details.
### Usage
The model has been trained with [MarianNMT](https://github.com/marian-nmt/marian) and the weights are in the Marian format.
#### Using Marian
To run inference with MarianNMT, refer to the [Inference/Decoding/Translation](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v1.0#inferencedecodingtranslation) section of our GitHub repository. You will need the model file `model.npz.best-chrf.npz` and the vocabulary file `model.sk-en.spm` from this repository.
#### Using transformers
We are working on this.
### Acknowledgements
This project has received funding from the European Union's Horizon Europe research and innovation programme under grant agreement No 101070350 and from UK Research and Innovation (UKRI) under the UK government's Horizon Europe funding guarantee [grant number 10052546]
### Citation
If you find this model useful, please cite the following paper:
```bibtex
@article{hpltv2,
title={An Expanded Massive Multilingual Dataset for High-Performance Language Technologies},
author={Laurie Burchell and Ona de Gibert and Nikolay Arefyev and Mikko Aulamo and Marta Bañón and Pinzhen Chen and Mariia Fedorova and Liane Guillou and Barry Haddow and Jan Hajič and Jindřich Helcl and Erik Henriksson and Mateusz Klimaszewski and Ville Komulainen and Andrey Kutuzov and Joona Kytöniemi and Veronika Laippala and Petter Mæhlum and Bhavitvya Malik and Farrokh Mehryary and Vladislav Mikhailov and Nikita Moghe and Amanda Myntti and Dayyán O'Brien and Stephan Oepen and Proyag Pal and Jousia Piha and Sampo Pyysalo and Gema Ramírez-Sánchez and David Samuel and Pavel Stepachev and Jörg Tiedemann and Dušan Variš and Tereza Vojtěchová and Jaume Zaragoza-Bernabeu},
journal={arXiv preprint arXiv:2503.10267},
year={2025},
url={https://arxiv.org/abs/2503.10267},
}
```
|