Dataset schema (column, type, and observed range or cardinality):

| Column | Type | Range / cardinality |
|---|---|---|
| modelId | string | length 5–139 |
| author | string | length 2–42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 – 2025-06-28 00:40:13 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string | 500 distinct values |
| tags | sequence | length 1 – 4.05k |
| pipeline_tag | string | 54 distinct values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 – 2025-06-28 00:36:54 |
| card | string | length 11 – 1.01M |
memeviss/zombieVIII_9 | memeviss | 2025-05-03T11:23:25Z | 0 | 0 | null | [
"safetensors",
"region:us"
] | null | 2025-05-03T10:03:45Z | # Optimized TTS Model
This model has been optimized for 100% TOP1 performance using advanced parameter enhancement techniques.
## Usage
To generate speech using this model, you can use the included script:
```bash
./generate_speech.py --text "Your text here" --output_path output.wav
```
For more details, see the optimization report in this directory.
|
ButterChicken98/soy_ab_early_drembooth | ButterChicken98 | 2025-05-03T11:19:27Z | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"text-to-image",
"dreambooth",
"diffusers-training",
"stable-diffusion",
"stable-diffusion-diffusers",
"base_model:stable-diffusion-v1-5/stable-diffusion-v1-5",
"base_model:finetune:stable-diffusion-v1-5/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2025-05-03T10:09:28Z | ---
base_model: stable-diffusion-v1-5/stable-diffusion-v1-5
library_name: diffusers
license: creativeml-openrail-m
inference: true
instance_prompt: A photo of a sks leaf
tags:
- text-to-image
- dreambooth
- diffusers-training
- stable-diffusion
- stable-diffusion-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# DreamBooth - ButterChicken98/soy_ab_early_drembooth
This is a DreamBooth model derived from stable-diffusion-v1-5/stable-diffusion-v1-5. The weights were trained on the instance prompt "A photo of a sks leaf" using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.




DreamBooth for the text encoder was enabled: True.
## Intended uses & limitations
#### How to use
```python
# Minimal sketch (an assumption, not from the original card): load with diffusers
from diffusers import StableDiffusionPipeline
pipe = StableDiffusionPipeline.from_pretrained("ButterChicken98/soy_ab_early_drembooth")
image = pipe("A photo of a sks leaf").images[0]  # the training instance prompt
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
Out-Lofara-Video/Out.Lofara.Viral.Video.Link | Out-Lofara-Video | 2025-05-03T11:15:47Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-03T11:14:51Z | Out Lofara original video: viral video leaked on X and other social media platforms
<a href="https://mswds.xyz/full-video/?v=Out-Lofara" rel="nofollow">🔴 ►Click Here to Watch (Watch Full Video)</a>
<a href="https://mswds.xyz/full-video/?v=Out-Lofara" rel="nofollow">🔴 ►Click Here to Watch (Full Viral Video Link)</a>
<a href="https://mswds.xyz/full-video/?v=Out-Lofara"><img src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif" alt="fsgd" /></a>
Actor Out Lofara's original video took the internet by storm and amazed viewers on various social media platforms. Out Lofara, a young and talented digital creator, recently became famous thanks to this interesting video.
Leaked video: actor Out Lofara's original viral video, leaked on X (Twitter).
Actor Out Lofara original video, official Twitter.
|
mradermacher/none_quantization_medical_Gemma-1.1-7B-Chat-GGUF | mradermacher | 2025-05-03T11:12:07Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama-factory",
"en",
"base_model:qsy71/none_quantization_medical_Gemma-1.1-7B-Chat",
"base_model:quantized:qsy71/none_quantization_medical_Gemma-1.1-7B-Chat",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-03T09:31:09Z | ---
base_model: qsy71/none_quantization_medical_Gemma-1.1-7B-Chat
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- llama-factory
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/qsy71/none_quantization_medical_Gemma-1.1-7B-Chat
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/none_quantization_medical_Gemma-1.1-7B-Chat-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
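If you prefer Python, the snippet below is a minimal sketch using the third-party `llama-cpp-python` bindings (an assumption; any GGUF-capable runtime works) with the Q4_K_M file from the table below:
```python
# Minimal sketch (assumes `pip install llama-cpp-python huggingface-hub`)
from llama_cpp import Llama

# Download the chosen quant from this repo and load it
llm = Llama.from_pretrained(
    repo_id="mradermacher/none_quantization_medical_Gemma-1.1-7B-Chat-GGUF",
    filename="none_quantization_medical_Gemma-1.1-7B-Chat.Q4_K_M.gguf",
    n_ctx=2048,  # context window; adjust to your hardware
)
out = llm("Question: What are common symptoms of dehydration?\nAnswer:", max_tokens=128)
print(out["choices"][0]["text"])
```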
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/none_quantization_medical_Gemma-1.1-7B-Chat-GGUF/resolve/main/none_quantization_medical_Gemma-1.1-7B-Chat.Q2_K.gguf) | Q2_K | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/none_quantization_medical_Gemma-1.1-7B-Chat-GGUF/resolve/main/none_quantization_medical_Gemma-1.1-7B-Chat.Q3_K_S.gguf) | Q3_K_S | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/none_quantization_medical_Gemma-1.1-7B-Chat-GGUF/resolve/main/none_quantization_medical_Gemma-1.1-7B-Chat.Q3_K_M.gguf) | Q3_K_M | 4.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/none_quantization_medical_Gemma-1.1-7B-Chat-GGUF/resolve/main/none_quantization_medical_Gemma-1.1-7B-Chat.Q3_K_L.gguf) | Q3_K_L | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/none_quantization_medical_Gemma-1.1-7B-Chat-GGUF/resolve/main/none_quantization_medical_Gemma-1.1-7B-Chat.IQ4_XS.gguf) | IQ4_XS | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/none_quantization_medical_Gemma-1.1-7B-Chat-GGUF/resolve/main/none_quantization_medical_Gemma-1.1-7B-Chat.Q4_K_S.gguf) | Q4_K_S | 5.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/none_quantization_medical_Gemma-1.1-7B-Chat-GGUF/resolve/main/none_quantization_medical_Gemma-1.1-7B-Chat.Q4_K_M.gguf) | Q4_K_M | 5.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/none_quantization_medical_Gemma-1.1-7B-Chat-GGUF/resolve/main/none_quantization_medical_Gemma-1.1-7B-Chat.Q5_K_S.gguf) | Q5_K_S | 6.1 | |
| [GGUF](https://huggingface.co/mradermacher/none_quantization_medical_Gemma-1.1-7B-Chat-GGUF/resolve/main/none_quantization_medical_Gemma-1.1-7B-Chat.Q5_K_M.gguf) | Q5_K_M | 6.2 | |
| [GGUF](https://huggingface.co/mradermacher/none_quantization_medical_Gemma-1.1-7B-Chat-GGUF/resolve/main/none_quantization_medical_Gemma-1.1-7B-Chat.Q6_K.gguf) | Q6_K | 7.1 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/none_quantization_medical_Gemma-1.1-7B-Chat-GGUF/resolve/main/none_quantization_medical_Gemma-1.1-7B-Chat.Q8_0.gguf) | Q8_0 | 9.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/none_quantization_medical_Gemma-1.1-7B-Chat-GGUF/resolve/main/none_quantization_medical_Gemma-1.1-7B-Chat.f16.gguf) | f16 | 17.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
roshanrb001/lora_model_gemma3_un | roshanrb001 | 2025-05-03T11:10:40Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma3",
"trl",
"en",
"base_model:unsloth/gemma-3-4b-it-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-4b-it-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-03T11:10:20Z | ---
base_model: unsloth/gemma-3-4b-it-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** roshanrb001
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3-4b-it-bnb-4bit
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Dharil/Combined-labelled-psudeo-labelled-data | Dharil | 2025-05-03T11:10:20Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-05-03T11:09:48Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
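The card is not filled in yet; in the meantime, here is a minimal hedged sketch based only on the repo tags (`bert`, `text-classification`). The label set and intended input domain are undocumented, so treat everything below as an assumption:
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "Dharil/Combined-labelled-psudeo-labelled-data"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("Example input sentence.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
# Label names come from the (undocumented) training setup
print(model.config.id2label[logits.argmax(dim=-1).item()])
```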
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
shivampr1001/Emo0.1 | shivampr1001 | 2025-05-03T11:08:50Z | 0 | 1 | keras | [
"keras",
"image-classification",
"computer-vision",
"emotion-detection",
"facial-expression",
"fer2013",
"real-time",
"license:apache-2.0",
"region:us"
] | image-classification | 2025-04-25T09:47:57Z | ---
license: apache-2.0
tags:
- keras
- image-classification
- computer-vision
- emotion-detection
- facial-expression
- fer2013
- real-time
model-index:
- name: Emo0.1 - Facial Emotion Recognition
results: []
metrics:
- accuracy
---
# Emo0.1 โ Facial Emotion Recognition (FER2013)
**Emo0.1** is a lightweight TensorFlow/Keras model, fine-tuned from VGG16, that classifies 7 facial emotions on the [FER2013 dataset](https://www.kaggle.com/datasets/msambare/fer2013).
## Model Details
- **Input**: 48x48 RGB face images
- **Output**: `['angry', 'disgusted', 'fearful', 'happy', 'neutral', 'sad', 'surprised']`
- **Framework**: TensorFlow 2.x
- **Training**: 30 epochs with Adam (lr=0.0001)
## Quick Start
### Installation
```bash
pip install tensorflow opencv-python huggingface_hub
```
### Load Model
```python
from huggingface_hub import hf_hub_download
from tensorflow.keras.models import load_model

model_path = hf_hub_download(repo_id="shivampr1001/Emo0.1", filename="Emo0.1.h5")
model = load_model(model_path)
```
### Predict from Image
```python
import cv2
import numpy as np

def predict_emotion(img_path):
    img = cv2.imread(img_path)
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # model expects RGB; OpenCV loads BGR
    img = cv2.resize(img, (48, 48)).astype('float32') / 255
    pred = model.predict(np.expand_dims(img, axis=0))[0]  # uses the model loaded above
    return ['angry', 'disgusted', 'fearful', 'happy', 'neutral', 'sad', 'surprised'][np.argmax(pred)]

print(predict_emotion("face.jpg"))  # hypothetical image path
```
| File | Purpose |
| ------------------------------------- | ------------------------------- |
| `facial_EmotionClassifer.h5` | Pre-trained Keras model |
| `RealTimeClassification.py` | Webcam-based emotion prediction |
| `predictBY_img.py` | Image-based emotion prediction |
| `haarcascade_frontalface_default.xml` | Haar cascade for face detection |
License: Apache 2.0 © [Shivam Prasad](https://github.com/shivamprasad1001) |
XzWang/ruozhiReasoner-Qwen3-4B | XzWang | 2025-05-03T11:05:47Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-03T11:02:54Z | ---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
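The card is not filled in yet; as a stopgap, here is a minimal sketch based on the repo tags (`qwen3`, `text-generation`, `conversational`). The chat format and sampling settings are assumptions:
```python
from transformers import pipeline

generator = pipeline("text-generation", model="XzWang/ruozhiReasoner-Qwen3-4B", device_map="auto")
messages = [{"role": "user", "content": "Why can't I see the wind?"}]
out = generator(messages, max_new_tokens=256, return_full_text=False)
print(out[0]["generated_text"])
```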
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Triangle104/huihui-ai_Qwen3-8B-abliterated-Q4_K_M-GGUF | Triangle104 | 2025-05-03T11:05:36Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"chat",
"abliterated",
"uncensored",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:huihui-ai/Qwen3-8B-abliterated",
"base_model:quantized:huihui-ai/Qwen3-8B-abliterated",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-03T11:05:14Z | ---
base_model: huihui-ai/Qwen3-8B-abliterated
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-8B/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- chat
- abliterated
- uncensored
- llama-cpp
- gguf-my-repo
extra_gated_prompt: '**Usage Warnings**
“**Risk of Sensitive or Controversial Outputs**”: This model’s safety filtering
has been significantly reduced, potentially generating sensitive, controversial,
or inappropriate content. Users should exercise caution and rigorously review generated
outputs.
“**Not Suitable for All Audiences**”: Due to limited content filtering, the model’s
outputs may be inappropriate for public settings, underage users, or applications
requiring high security.
“**Legal and Ethical Responsibilities**”: Users must ensure their usage complies
with local laws and ethical standards. Generated content may carry legal or ethical
risks, and users are solely responsible for any consequences.
“**Research and Experimental Use**”: It is recommended to use this model for research,
testing, or controlled environments, avoiding direct use in production or public-facing
commercial applications.
“**Monitoring and Review Recommendations**”: Users are strongly advised to monitor
model outputs in real-time and conduct manual reviews when necessary to prevent
the dissemination of inappropriate content.
“**No Default Safety Guarantees**”: Unlike standard models, this model has not undergone
rigorous safety optimization. huihui.ai bears no responsibility for any consequences
arising from its use.'
---
# Triangle104/Qwen3-8B-abliterated-Q4_K_M-GGUF
This model was converted to GGUF format from [`huihui-ai/Qwen3-8B-abliterated`](https://huggingface.co/huihui-ai/Qwen3-8B-abliterated) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/huihui-ai/Qwen3-8B-abliterated) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Qwen3-8B-abliterated-Q4_K_M-GGUF --hf-file qwen3-8b-abliterated-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Qwen3-8B-abliterated-Q4_K_M-GGUF --hf-file qwen3-8b-abliterated-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Qwen3-8B-abliterated-Q4_K_M-GGUF --hf-file qwen3-8b-abliterated-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Qwen3-8B-abliterated-Q4_K_M-GGUF --hf-file qwen3-8b-abliterated-q4_k_m.gguf -c 2048
```
|
nmdsagbja/TRENDING.Jobz.Hunting.Sajal.Malik.Viral.Video.Leaks.Tutorial | nmdsagbja | 2025-05-03T11:03:50Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-03T11:03:36Z | Sajal Malik original video: viral video leaked on X and other social media platforms
<a href="https://mswds.xyz/full-video/?v=Sajal-Malik" rel="nofollow">🔴 ►Click Here to Watch (Watch Full Video)</a>
<a href="https://mswds.xyz/full-video/?v=Sajal-Malik" rel="nofollow">🔴 ►Click Here to Watch (Full Viral Video Link)</a>
<a href="https://mswds.xyz/full-video/?v=Sajal-Malik"><img src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif" alt="fsgd" /></a>
Actor Sajal Malik's original video took the internet by storm and amazed viewers on various social media platforms. Sajal Malik, a young and talented digital creator, recently became famous thanks to this interesting video.
Leaked video: actor Sajal Malik's original viral video, leaked on X (Twitter).
Actor Sajal Malik original video, official Twitter.
|
eliza555beth2002/DeepSeekText2SQLOvertrained | eliza555beth2002 | 2025-05-03T11:01:06Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-03T10:59:49Z | ---
base_model: unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** eliza555beth2002
- **License:** apache-2.0
- **Finetuned from model :** unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
kate1130/kluebert-GPT-f1-bullying-classifier | kate1130 | 2025-05-03T11:00:32Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:klue/bert-base",
"base_model:finetune:klue/bert-base",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-05-03T10:34:34Z | ---
library_name: transformers
license: cc-by-sa-4.0
base_model: klue/bert-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: kluebert-GPT-f1-bullying-classifier
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kluebert-GPT-f1-bullying-classifier
This model is a fine-tuned version of [klue/bert-base](https://huggingface.co/klue/bert-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5276
- F1: 0.8938
## Model description
More information needed
## Intended uses & limitations
More information needed
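No usage snippet is provided; the following is a minimal sketch, assuming a standard sequence-classification checkpoint built on `klue/bert-base` (the label names are not documented in this card):
```python
from transformers import pipeline

clf = pipeline("text-classification", model="kate1130/kluebert-GPT-f1-bullying-classifier")
# Korean input, matching the klue/bert-base backbone; label meanings are undocumented
print(clf("분류할 문장을 여기에 입력하세요."))
```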
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.3644 | 1.0 | 325 | 0.3000 | 0.9028 |
| 0.0891 | 2.0 | 650 | 0.4418 | 0.8889 |
| 0.0285 | 3.0 | 975 | 0.5276 | 0.8938 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1
|
harman/gemma2-9b_ultrafeedback-CARMA-paraphrase_neutrals_pairpm | harman | 2025-05-03T10:58:26Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"feature-extraction",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2025-05-03T10:51:40Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
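The card is unfilled; the sketch below follows the `feature-extraction` pipeline tag and treats the checkpoint as a plain encoder. The repo name suggests a pairwise preference model, but since that interface is undocumented, everything here is an assumption:
```python
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "harman/gemma2-9b_ultrafeedback-CARMA-paraphrase_neutrals_pairpm"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

inputs = tokenizer("A sentence to embed.", return_tensors="pt").to(model.device)
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state  # (batch, seq_len, hidden_size)
print(hidden.mean(dim=1).shape)  # one mean-pooled vector per input
```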
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
appishi/gensyn-checkpoints-sprightly_hibernating_antelope | appishi | 2025-05-03T10:57:25Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am sprightly hibernating antelope",
"unsloth",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-28T01:01:55Z | ---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: gensyn-checkpoints-sprightly_hibernating_antelope
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am sprightly hibernating antelope
- unsloth
- trl
licence: license
---
# Model Card for gensyn-checkpoints-sprightly_hibernating_antelope
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="appishi/gensyn-checkpoints-sprightly_hibernating_antelope", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
deeponh/mal_9b_9b_L2 | deeponh | 2025-05-03T10:55:27Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-02T05:53:15Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
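The card documents neither the task nor the architecture; the sketch below assumes a standard causal language model checkpoint (an unverified assumption based on the `unsloth` tag):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: a causal LM; the card does not state the task or architecture
model_id = "deeponh/mal_9b_9b_L2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Hello,", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```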
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Atnafu/eng-amh-norm-nllb_600M_eng2tir-un | Atnafu | 2025-05-03T10:54:29Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"m2m_100",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-05-03T10:50:25Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
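No usage example is given; here is a minimal sketch assuming the standard NLLB-200 interface (per the `m2m_100`/`text2text-generation` tags) and the English-to-Tigrinya direction suggested by `eng2tir` in the repo name:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "Atnafu/eng-amh-norm-nllb_600M_eng2tir-un"
tokenizer = AutoTokenizer.from_pretrained(model_id, src_lang="eng_Latn")
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("Good morning.", return_tensors="pt")
out = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.convert_tokens_to_ids("tir_Ethi"),  # NLLB code for Tigrinya
    max_new_tokens=64,
)
print(tokenizer.batch_decode(out, skip_special_tokens=True)[0])
```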
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Geraldine/FineQwen2.5-0.5B-Instruct-sft-unimarc | Geraldine | 2025-05-03T10:52:38Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-03T10:51:54Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
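No usage example is given; the following minimal sketch relies on the `qwen2`/`text-generation`/`conversational` tags. The prompt asking for a UNIMARC record is only a guess from the repo name:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Geraldine/FineQwen2.5-0.5B-Instruct-sft-unimarc"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Hypothetical request; the training prompt format is not documented in this card
messages = [{"role": "user", "content": "Generate a UNIMARC record for: 'Example Title' by Jane Doe, 2020."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
out = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```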
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Ftfyhh/wan_1.3b_lora_pnts_drop | Ftfyhh | 2025-05-03T10:51:56Z | 0 | 1 | null | [
"region:us"
] | null | 2025-05-03T10:38:41Z | wan 1.3b i2v vace workflow: https://github.com/Mozer/comfy_stuff/blob/main/workflows/wan_vace_1.3b_i2v_and_lora.json
prompt: `panties drop. woman is slowly taking off her shorts` |
BootesVoid/cma81qe6m021anega4jzv89wx_cma82f654021mnega5rmnvih7 | BootesVoid | 2025-05-03T10:49:34Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-03T10:49:32Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: MADISON
---
# Cma81Qe6M021Anega4Jzv89Wx_Cma82F654021Mnega5Rmnvih7
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `MADISON` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "MADISON",
"lora_weights": "https://huggingface.co/BootesVoid/cma81qe6m021anega4jzv89wx_cma82f654021mnega5rmnvih7/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cma81qe6m021anega4jzv89wx_cma82f654021mnega5rmnvih7', weight_name='lora.safetensors')
image = pipeline('MADISON').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cma81qe6m021anega4jzv89wx_cma82f654021mnega5rmnvih7/discussions) to add images that show off what you've made with this LoRA.
|
ilyass31/Vulnera_Scan | ilyass31 | 2025-05-03T10:46:05Z | 4 | 1 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"unsloth",
"trl",
"grpo",
"text-classification",
"en",
"dataset:google/code_x_glue_cc_defect_detection",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-02-14T08:04:33Z | ---
library_name: transformers
tags:
- unsloth
- trl
- grpo
license: apache-2.0
datasets:
- google/code_x_glue_cc_defect_detection
language:
- en
metrics:
- accuracy
base_model:
- meta-llama/Llama-3.1-8B-Instruct
pipeline_tag: text-classification
---
# Model Card for Vulnera-Scan
<!-- Provide a quick summary of what the model is/does. -->
### Model Details
#### Model Description
The **Vulnera_Scan** model is a transformer-based model fine-tuned for detecting vulnerabilities in programming code. It leverages pre-trained weights from **meta-llama/Llama-3.1-8B-Instruct**, and has been fine-tuned using the **google/code_x_glue_cc_defect_detection** dataset. This dataset contains a variety of labeled code snippets, each annotated with different types of vulnerabilities such as buffer overflows, SQL injections, and other security issues.
The model performs **feature extraction** on code snippets and classifies them based on whether they exhibit vulnerable behavior. The goal is to provide an automated assistant that can detect potential vulnerabilities in code, making it easier for developers and security professionals to identify and address security risks early in the development process.
- **Model Type:** GRPO-based model for defect detection
- **Base Model:** meta-llama/Llama-3.1-8B-Instruct
- **Pipeline Tag:** feature-extraction
- **Fine-tuned from:** meta-llama/Llama-3.1-8B-Instruct
- **License:** Boost Software License 1.0
- **Primary Task:** Vulnerability detection in code
- **Training Dataset:** google/code_x_glue_cc_defect_detection
- **Language(s):** English (Programming code in English)
- **Metrics Used:** Accuracy
#### Model Architecture
The model uses a variant of the transformer architecture, which has been fine-tuned for vulnerability detection in code. It processes code snippets and uses learned patterns from the dataset to classify code as either vulnerable or not. The model uses **GRPO (Group Relative Policy Optimization)**, which helps the model better adapt to the task of defect detection, providing improved performance compared to standard fine-tuning techniques.
The model employs a **tokenization approach** that is well-suited for code, converting the raw code into tokenized representations that are then processed by the transformer architecture. The training uses **mixed-precision training** (fp16), which reduces memory usage and speeds up training without compromising accuracy.
#### Performance
The model has been evaluated on the **google/code_x_glue_cc_defect_detection** dataset, where it showed promising results with an accuracy of **85%** on detecting vulnerabilities across various code types. This performance was achieved by evaluating the model on a test set of labeled code snippets, where it was able to successfully identify vulnerable code patterns.
#### Practical Use Cases
- **Code Security Analysis:** Automatically detecting potential security vulnerabilities within code to ensure that software is secure before deployment.
- **Developer Assistance:** Helping developers identify areas in their codebase that could be potential vulnerabilities, reducing manual inspection time and improving overall code security.
- **Automated Code Reviews:** Integrating the model into code review processes to flag potential vulnerabilities as part of the continuous integration (CI) pipeline.
---
### Model Description
The Vulnera_Scan model is a fine-tuned transformer-based model developed for detecting vulnerabilities in programming code. Fine-tuned from meta-llama/Llama-3.1-8B-Instruct, it has been specifically trained on the google/code_x_glue_cc_defect_detection dataset, which contains annotated code samples with identified vulnerabilities like buffer overflows, SQL injections, and more.
This model is designed to classify code snippets based on their vulnerability, helping developers and security experts automatically detect and address potential security risks within their software. It performs feature extraction and classification tasks, leveraging the power of the GRPO (Group Relative Policy Optimization) framework for better optimization and performance in defect detection tasks.
The model operates by processing raw code into tokenized representations, then using its pre-trained transformer architecture to predict the presence of vulnerabilities with high accuracy. By utilizing mixed-precision training (fp16), the model achieves a balance of performance and memory efficiency, making it suitable for large-scale code analysis.
- **Developed by:** [ilyas DAHAOUI]
- **Model type:** [Transformer-based Model for Code Vulnerability Detection]
- **Language(s) (NLP):** [Code-based vulnerabilities in various programming languages like Python, C++, JavaScript, etc.]
- **License:** [MIT License]
- **Finetuned from model [meta-llama/Llama-3.1-8B-Instruct]:** [GRPO]
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[This model is designed to identify potential vulnerabilities in code by analyzing programming syntax, logic, and structure. It is mainly intended for developers, security analysts, and researchers who aim to scan and identify defects or weaknesses in software projects. The model can help in automatically detecting issues like buffer overflows, SQL injection points, memory leaks, and other common vulnerabilities in codebases.

For direct use, the model can be used to:

- Analyze code snippets or entire codebases to identify potential security flaws.
- Integrate into CI/CD pipelines for real-time vulnerability detection during development.
- Provide developers with actionable feedback and recommendations on how to fix the issues identified.]
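A minimal usage sketch (the repo does not publish an official inference script, so the prompt wording and generation settings below are assumptions, not the author's API):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ilyass31/Vulnera_Scan"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Hypothetical prompt format: ask the causal LM to label a snippet.
snippet = 'char buf[8]; strcpy(buf, user_input);'  # classic buffer overflow pattern
prompt = (
    "Is the following code vulnerable? Answer 'vulnerable' or 'safe'.\n\n"
    f"{snippet}\n\nAnswer:"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=8)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```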
### Downstream Use
[This model can be fine-tuned further for specific use cases or integrated into larger security frameworks, such as:

- Code review tools that focus on security vulnerability detection.
- Automated testing frameworks for software applications.
- Security audit tools used by organizations to assess code security.

This could benefit enterprises looking to improve their software's security posture and protect against vulnerabilities before they reach production environments.]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[The model is not designed for:

- General-purpose code generation or other tasks like coding style or optimization recommendations.
- Detecting non-security-related bugs or errors in code, such as performance bottlenecks.
- Use cases where extremely high precision or domain-specific vulnerability types are required, as further fine-tuning may be necessary.

The model should not be used to mislead or manipulate software security in malicious ways. It is important to ensure that vulnerabilities are addressed properly and securely after detection.]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[The model was fine-tuned on the Google CodeXGlue Defect Detection dataset, a part of the CodeXGlue benchmark. This dataset contains code snippets and annotations related to defect detection tasks. It includes various programming languages, such as Python, Java, and C++, and is designed to train models for tasks like defect classification and bug prediction in code.]
#### Training Hyperparameters
- **Training regime**: fp16 mixed precision
- The model was fine-tuned using **16-bit mixed precision (fp16)** training. This approach reduces memory usage and speeds up training without significant loss in accuracy, making it suitable for large models like this one.
- **Learning rate**: 5e-6
- A low learning rate of `5e-6` was used to ensure smooth convergence while avoiding overfitting.
- **Batch size**: 1
- A batch size of `1` was used due to GPU memory limitations. To simulate larger batch sizes, gradient accumulation was applied.
- **Optimizer**: paged_adamw_8bit
- The AdamW optimizer with **8-bit precision** was used to further optimize memory efficiency while maintaining stable training.
- **Gradient accumulation**: 4
- Gradient accumulation was performed over 4 steps to simulate larger batch sizes and avoid running out of GPU memory.
- **Warmup ratio**: 0.1
- A warmup ratio of **0.1** was used to gently ramp up the learning rate at the start of training, which helps to prevent instability during early stages.
- **Learning rate scheduler**: cosine
- A **cosine learning rate scheduler** was used to smoothly decrease the learning rate over the course of training, helping the model converge effectively.
- **Max gradient norm**: 1.0
- **Gradient clipping** was applied with a max gradient norm of `1.0` to avoid issues with exploding gradients during training.
- **Number of epochs**: 1
- The model was trained for **1 epoch** to quickly validate its performance and adjust hyperparameters based on available computational resources.
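As a rough illustration, the settings above map onto `transformers.TrainingArguments` roughly as follows (a sketch, not the author's actual training script):

```python
from transformers import TrainingArguments

# Sketch only: mirrors the hyperparameters listed above.
args = TrainingArguments(
    output_dir="vulnera-scan-ft",
    learning_rate=5e-6,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=4,   # simulates a larger effective batch size
    optim="paged_adamw_8bit",        # 8-bit AdamW for memory efficiency
    warmup_ratio=0.1,
    lr_scheduler_type="cosine",
    max_grad_norm=1.0,               # gradient clipping
    num_train_epochs=1,
    fp16=True,                       # 16-bit mixed precision
)
```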
## Model Card Contact
For inquiries or more information about this model, please contact:
- **Author**: [ilyas dahaoui]
- **Email**: [[email protected]]
If you have any technical questions, issues, or would like to collaborate, feel free to reach out via the above channels. |
harman/gemma2-9b_ultrafeedback-RRM_pairpm | harman | 2025-05-03T10:44:38Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"feature-extraction",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2025-05-03T10:36:52Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
arkitex/wav2vec2-finetune-authentic-and-synthetic | arkitex | 2025-05-03T10:44:13Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice_17_0",
"base_model:facebook/wav2vec2-base",
"base_model:finetune:facebook/wav2vec2-base",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-04-28T20:13:15Z | ---
library_name: transformers
license: apache-2.0
base_model: facebook/wav2vec2-base
tags:
- generated_from_trainer
datasets:
- common_voice_17_0
metrics:
- wer
model-index:
- name: wav2vec2-finetune-authentic-and-synthetic
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: common_voice_17_0
type: common_voice_17_0
config: en
split: None
args: en
metrics:
- name: Wer
type: wer
value: 0.3096307872631373
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-finetune-authentic-and-synthetic
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the common_voice_17_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4856
- Wer: 0.3096
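A minimal transcription sketch (assuming the checkpoint works with the standard CTC-based ASR pipeline; the audio file name is a placeholder):

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="arkitex/wav2vec2-finetune-authentic-and-synthetic",
)
# Expects 16 kHz mono audio, as in Common Voice preprocessing.
print(asr("sample.wav")["text"])
```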
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:-----:|:---------------:|:------:|
| 3.0567 | 0.0548 | 500 | 2.9945 | 1.0 |
| 2.5069 | 0.1097 | 1000 | 1.4524 | 0.7744 |
| 0.94 | 0.1645 | 1500 | 1.0778 | 0.6329 |
| 0.7146 | 0.2194 | 2000 | 0.9130 | 0.5607 |
| 0.6139 | 0.2742 | 2500 | 0.9247 | 0.5226 |
| 0.5694 | 0.3291 | 3000 | 0.8565 | 0.5149 |
| 0.5277 | 0.3839 | 3500 | 0.8115 | 0.4786 |
| 0.5101 | 0.4387 | 4000 | 0.7730 | 0.4794 |
| 0.4919 | 0.4936 | 4500 | 0.7676 | 0.4485 |
| 0.4756 | 0.5484 | 5000 | 0.7997 | 0.4870 |
| 0.4657 | 0.6033 | 5500 | 0.7490 | 0.4536 |
| 0.4544 | 0.6581 | 6000 | 0.7065 | 0.4315 |
| 0.4412 | 0.7130 | 6500 | 0.7525 | 0.4413 |
| 0.4331 | 0.7678 | 7000 | 0.6989 | 0.4288 |
| 0.4332 | 0.8226 | 7500 | 0.6959 | 0.4296 |
| 0.4195 | 0.8775 | 8000 | 0.6629 | 0.4112 |
| 0.4068 | 0.9323 | 8500 | 0.6488 | 0.4083 |
| 0.3941 | 0.9872 | 9000 | 0.6760 | 0.4093 |
| 0.3784 | 1.0420 | 9500 | 0.6555 | 0.4001 |
| 0.3582 | 1.0969 | 10000 | 0.6495 | 0.4048 |
| 0.3577 | 1.1517 | 10500 | 0.6221 | 0.4014 |
| 0.345 | 1.2065 | 11000 | 0.6323 | 0.3862 |
| 0.3447 | 1.2614 | 11500 | 0.6067 | 0.3848 |
| 0.3383 | 1.3162 | 12000 | 0.6103 | 0.3843 |
| 0.3402 | 1.3711 | 12500 | 0.5990 | 0.3757 |
| 0.3256 | 1.4259 | 13000 | 0.6142 | 0.3870 |
| 0.327 | 1.4808 | 13500 | 0.6000 | 0.3741 |
| 0.3241 | 1.5356 | 14000 | 0.5835 | 0.3630 |
| 0.3206 | 1.5904 | 14500 | 0.5866 | 0.3698 |
| 0.3112 | 1.6453 | 15000 | 0.5728 | 0.3671 |
| 0.3128 | 1.7001 | 15500 | 0.5798 | 0.3628 |
| 0.3202 | 1.7550 | 16000 | 0.5766 | 0.3587 |
| 0.3098 | 1.8098 | 16500 | 0.5657 | 0.3569 |
| 0.3047 | 1.8646 | 17000 | 0.5769 | 0.3553 |
| 0.3035 | 1.9195 | 17500 | 0.5617 | 0.3596 |
| 0.2943 | 1.9743 | 18000 | 0.5521 | 0.3532 |
| 0.2808 | 2.0292 | 18500 | 0.5538 | 0.3469 |
| 0.2549 | 2.0840 | 19000 | 0.5402 | 0.3389 |
| 0.2562 | 2.1389 | 19500 | 0.5339 | 0.3333 |
| 0.2647 | 2.1937 | 20000 | 0.5278 | 0.3313 |
| 0.2508 | 2.2485 | 20500 | 0.5249 | 0.3296 |
| 0.2497 | 2.3034 | 21000 | 0.5310 | 0.3280 |
| 0.251 | 2.3582 | 21500 | 0.5288 | 0.3278 |
| 0.2468 | 2.4131 | 22000 | 0.5295 | 0.3298 |
| 0.241 | 2.4679 | 22500 | 0.5166 | 0.3235 |
| 0.2361 | 2.5228 | 23000 | 0.5179 | 0.3237 |
| 0.2389 | 2.5776 | 23500 | 0.5003 | 0.3165 |
| 0.2327 | 2.6324 | 24000 | 0.5108 | 0.3184 |
| 0.2384 | 2.6873 | 24500 | 0.4973 | 0.3139 |
| 0.2304 | 2.7421 | 25000 | 0.4945 | 0.3120 |
| 0.2306 | 2.7970 | 25500 | 0.4885 | 0.3130 |
| 0.2296 | 2.8518 | 26000 | 0.4906 | 0.3112 |
| 0.2264 | 2.9067 | 26500 | 0.4884 | 0.3085 |
| 0.2301 | 2.9615 | 27000 | 0.4856 | 0.3096 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0
- Datasets 3.5.0
- Tokenizers 0.21.1
|
Xthegamblerr/Grinders | Xthegamblerr | 2025-05-03T10:41:59Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-03T10:41:59Z | ---
license: apache-2.0
---
|
Sandra-Benede/Sandra.Benede.Trending.Video.Viral.HD | Sandra-Benede | 2025-05-03T10:41:29Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-03T10:35:50Z |
<a href="https://sdu.sk/9Ip"><img src="http://4.bp.blogspot.com/-VFcup4RzDQY/Upiobuokb5I/AAAAAAAAAV0/64yKpZilDCg/s1600/oie_nxv3mlmduAj1.gif" alt="fsd" /></a>
<a href="https://sdu.sk/9Ip" rel="nofollow">๐ด โคโบ๐๐ฅ๐ข๐ค ๐๐๐ซ๐ ๐ญ๐จ๐๐ (๐จ๐๐๐ฃ ๐ช๐ฅ ๐๐ฃ๐ ๐ฌ๐๐ฉ๐๐ ๐๐ช๐ก๐ก ๐ซ๐๐๐๐ค ๐๐ฟ)</a>
<a href="https://sdu.sk/9Ip" rel="nofollow">๐ด โคโบ๐๐ฅ๐ข๐ค ๐๐๐ซ๐ ๐ญ๐จ๐๐ (๐
๐ฎ๐ฅ๐ฅ ๐ฏ๐ข๐๐๐จ ๐๐ข๐ง๐ค)</a>
|
unsloth/Qwen3-16B-A3B-GGUF | unsloth | 2025-05-03T10:41:06Z | 0 | 3 | null | [
"gguf",
"qwen3_moe",
"unsloth",
"base_model:kalomaze/Qwen3-16B-A3B",
"base_model:quantized:kalomaze/Qwen3-16B-A3B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-03T09:16:07Z | ---
tags:
- unsloth
license: apache-2.0
base_model:
- kalomaze/Qwen3-16B-A3B
---
# Qwen3-16B-A3B
Qwen3-16B-A3B is a rendition of Qwen3-30B-A3B by [kalomaze](https://huggingface.co/kalomaze/Qwen3-16B-A3B).
A man-made horror beyond your comprehension.
But no, seriously, this is my experiment to:
- measure the probability that any given expert will activate (over my personal set of fairly diverse calibration data), per layer
- prune 64/128 of the least used experts per layer (with reordered router and indexing per layer); a toy sketch of this selection step follows below
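A toy sketch of that selection step (illustrative only; the dimensions and counts below are stand-ins, and the real measurement ran inside the hacked vllm):

```python
import numpy as np

# Stand-in dimensions and data: real counts came from the calibration runs.
num_layers, num_experts, keep = 48, 128, 64
activation_counts = np.random.randint(0, 10_000, size=(num_layers, num_experts))

kept_experts = []
for layer in range(num_layers):
    # Keep the 64 most frequently activated experts for this layer;
    # the kept indices define the reordered router and expert indexing.
    top = np.argsort(activation_counts[layer])[::-1][:keep]
    kept_experts.append(np.sort(top).tolist())
```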
It can still write semi-coherently without any additional training or distillation applied on top of the original 30B MoE.
The .txt files with the original measurements are provided in the repo along with the exported weights.
Custom testing to measure the experts was done on a hacked version of vllm, and then I made a bespoke script to selectively export the weights according to the measurements. |
Talyiamira/nvidia-model | Talyiamira | 2025-05-03T10:40:55Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-05-03T10:39:17Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
sergioalves/21cb5019-def6-4739-8e7f-2d8ba2af3d3f | sergioalves | 2025-05-03T10:40:14Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-Coder-7B",
"base_model:adapter:unsloth/Qwen2.5-Coder-7B",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-03T09:45:52Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-Coder-7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 21cb5019-def6-4739-8e7f-2d8ba2af3d3f
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: true
adapter: lora
base_model: unsloth/Qwen2.5-Coder-7B
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 708430bba54e3dfe_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/708430bba54e3dfe_train_data.json
type:
field_instruction: inputs
field_output: targets
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: sergioalves/21cb5019-def6-4739-8e7f-2d8ba2af3d3f
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/708430bba54e3dfe_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 058bddbf-9428-42d9-9f01-8653e736d88a
wandb_project: s56-8
wandb_run: your_name
wandb_runid: 058bddbf-9428-42d9-9f01-8653e736d88a
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 21cb5019-def6-4739-8e7f-2d8ba2af3d3f
This model is a fine-tuned version of [unsloth/Qwen2.5-Coder-7B](https://huggingface.co/unsloth/Qwen2.5-Coder-7B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8818
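A minimal sketch for loading the adapter on top of the base model with PEFT (the prompt and generation settings are assumptions):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("unsloth/Qwen2.5-Coder-7B", device_map="auto")
model = PeftModel.from_pretrained(base, "sergioalves/21cb5019-def6-4739-8e7f-2d8ba2af3d3f")
tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen2.5-Coder-7B")

inputs = tokenizer("Write a Python function that reverses a string.", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```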
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.9178 | 0.0083 | 200 | 1.8818 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
abkimc/q-FrozenLake-v1-4x4-noSlippery | abkimc | 2025-05-03T10:39:15Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2025-05-03T10:39:12Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym  # the Deep RL course notebooks use the gymnasium API

# load_from_hub is the helper defined in the Hugging Face Deep RL course notebook
model = load_from_hub(repo_id="abkimc/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
MetaphoricalCode/Omega-Darker_The-Final-Directive-24B_EXL2_2.5bpw_H8 | MetaphoricalCode | 2025-05-03T10:38:59Z | 3 | 0 | null | [
"safetensors",
"mistral",
"nsfw",
"explicit",
"roleplay",
"unaligned",
"ERP",
"Erotic",
"Horror",
"Violence",
"text-generation",
"conversational",
"en",
"base_model:ReadyArt/Omega-Darker_The-Final-Directive-24B",
"base_model:quantized:ReadyArt/Omega-Darker_The-Final-Directive-24B",
"license:apache-2.0",
"exl2",
"region:us"
] | text-generation | 2025-04-28T18:20:40Z | ---
license: apache-2.0
language:
- en
base_model:
- ReadyArt/Omega-Darker_The-Final-Directive-24B
base_model_relation: quantized
pipeline_tag: text-generation
tags:
- nsfw
- explicit
- roleplay
- unaligned
- ERP
- Erotic
- Horror
- Violence
---
<style>
body {
font-family: 'Quicksand', sans-serif;
background: linear-gradient(135deg, #0a1a1a 0%, #001010 100%);
color: #e1ffff !important;
text-shadow: 0 0 3px rgba(0, 0, 0, 0.7);
margin: 0;
padding: 20px;
transition: all 0.5s ease;
}
@media (prefers-color-scheme: light) {
body {
background: linear-gradient(135deg, #e1ffff 0%, #c0f0ff 100%);
color: #002b36 !important;
text-shadow: 0 0 3px rgba(255, 255, 255, 0.7);
}
}
.container {
min-width: 100%;
margin: 0 auto;
max-width: 1200px;
background: rgba(0, 17, 22, 0.95);
border-radius: 12px;
padding: 30px;
box-shadow: 0 0 20px rgba(0, 255, 255, 0.1);
border: 1px solid rgba(0, 255, 255, 0.2);
position: relative;
overflow: hidden;
}
.container::before {
content: '';
position: absolute;
top: -1px;
left: -1px;
right: -1px;
bottom: -1px;
border: 1px solid rgba(0, 255, 255, 0.5);
border-radius: 12px;
pointer-events: none;
animation: borderGlow 3s ease-in-out infinite alternate;
}
@keyframes borderGlow {
0% {
box-shadow: 0 0 5px rgba(0, 255, 255, 0.3);
border-color: rgba(0, 255, 255, 0.5);
}
50% {
box-shadow: 0 0 15px rgba(255, 0, 255, 0.3);
border-color: rgba(255, 0, 255, 0.5);
}
100% {
box-shadow: 0 0 5px rgba(0, 255, 255, 0.3);
border-color: rgba(0, 255, 255, 0.5);
}
}
.header {
text-align: center;
margin-bottom: 30px;
position: relative;
}
.header::after {
content: '';
position: absolute;
bottom: -15px;
left: 25%;
right: 25%;
height: 1px;
background: linear-gradient(90deg, transparent, rgba(0, 255, 255, 0.5), transparent);
animation: scanline 8s linear infinite;
display: none;
}
@keyframes scanline {
0% { background-position: -100% 0; }
100% { background-position: 200% 0; }
}
.model-name {
color: #00ffff;
font-size: 2.5em;
text-shadow: 0 0 15px rgba(0, 255, 255, 0.5);
margin: 0;
letter-spacing: -1px;
animation: textGlow 4s ease-in-out infinite alternate;
}
@keyframes textGlow {
0% { text-shadow: 0 0 15px rgba(0, 255, 255, 0.5); }
50% { text-shadow: 0 0 20px rgba(255, 0, 255, 0.5); }
100% { text-shadow: 0 0 15px rgba(0, 255, 255, 0.5); }
}
.subtitle {
color: #00ffcc;
font-size: 1.2em;
margin-top: 10px;
animation: subtitleFade 6s ease-in-out infinite;
}
@keyframes subtitleFade {
0%, 100% { opacity: 0.8; }
50% { opacity: 1; }
}
.waifu-container {
margin: 20px -30px;
width: calc(100% + 60px);
overflow: hidden;
border-radius: 8px;
border: 1px solid rgba(0, 255, 255, 0.3);
position: relative;
}
.waifu-container::before {
content: '';
position: absolute;
top: 0;
left: 0;
right: 0;
bottom: 0;
background: linear-gradient(45deg,
rgba(0, 255, 255, 0.1) 0%,
transparent 20%,
transparent 80%,
rgba(255, 0, 255, 0.1) 100%);
pointer-events: none;
animation: gradientSlide 10s linear infinite;
}
@keyframes gradientSlide {
0% { background-position: 0% 0%; }
100% { background-position: 100% 100%; }
}
.waifu-img {
width: 100%;
height: auto;
border-radius: 0;
border: none;
box-shadow: 0 0 40px rgba(0, 255, 255, 0.2);
transition: transform 0.5s ease;
}
.waifu-img:hover {
transform: scale(1.01);
}
.section {
color: #e1ffff;
margin: 25px 0;
padding: 20px;
background: rgba(5, 25, 35, 0.9);
border-radius: 8px;
border: 1px solid rgba(0, 255, 255, 0.15);
position: relative;
transition: all 0.3s ease;
}
.section:hover {
border-color: rgba(255, 0, 255, 0.3);
box-shadow: 0 0 15px rgba(0, 255, 255, 0.1);
}
.section::before {
content: '';
position: absolute;
top: -1px;
left: -1px;
right: -1px;
bottom: -1px;
border: 1px solid rgba(0, 255, 255, 0.3);
border-radius: 8px;
pointer-events: none;
animation: sectionPulse 5s ease-in-out infinite;
}
@keyframes sectionPulse {
0%, 100% { opacity: 0.7; }
50% { opacity: 0.3; }
}
.section-title {
color: #00ffff;
font-size: 1.8em;
margin-top: 0;
text-shadow: 0 0 5px rgba(0, 255, 255, 0.3);
position: relative;
display: inline-block;
}
.section-title::after {
content: '';
position: absolute;
bottom: -5px;
left: 0;
width: 100%;
height: 1px;
background: linear-gradient(90deg, rgba(0, 255, 255, 0.5), rgba(255, 0, 255, 0.5));
transform: scaleX(0);
transform-origin: left;
transition: transform 0.3s ease;
}
.section:hover .section-title::after {
transform: scaleX(1);
}
.quant-links {
display: grid;
grid-template-columns: repeat(3, 1fr);
gap: 15px;
margin: 20px 0;
}
.link-card {
padding: 15px;
background: rgba(20, 35, 45, 0.95);
border-radius: 8px;
transition: all 0.3s ease;
border: 1px solid rgba(0, 255, 255, 0.1);
position: relative;
overflow: hidden;
}
.link-card::before {
content: '';
position: absolute;
top: 0;
left: 0;
right: 0;
height: 2px;
background: linear-gradient(90deg, rgba(0, 255, 255, 0.5), rgba(255, 0, 255, 0.5));
animation: cardScan 4s linear infinite;
}
@keyframes cardScan {
0% { transform: translateX(-100%); }
100% { transform: translateX(100%); }
}
.link-card:hover {
transform: translateY(-3px);
box-shadow: 0 5px 15px rgba(0, 255, 255, 0.2);
border-color: rgba(255, 0, 255, 0.3);
}
.link-card h3 {
margin-top: 0;
color: #e1ffff !important;
}
.link-button {
display: inline-flex;
align-items: center;
background: rgba(0, 255, 255, 0.1);
color: #e1ffff !important;
padding: 8px 15px;
border-radius: 6px;
text-decoration: none;
border: 1px solid rgba(0, 255, 255, 0.3);
margin: 5px 0;
transition: all 0.3s ease;
font-size: 0.95em;
position: relative;
overflow: hidden;
}
.link-button::before {
content: '';
position: absolute;
top: 0;
left: -100%;
width: 100%;
height: 100%;
background: linear-gradient(90deg, transparent, rgba(255, 255, 255, 0.2), transparent);
transition: all 0.5s ease;
}
.link-button:hover {
background: rgba(0, 255, 255, 0.2);
border-color: rgba(0, 255, 255, 0.5);
transform: translateY(-2px);
box-shadow: 0 4px 12px rgba(0, 255, 255, 0.2);
}
.link-button:hover::before {
left: 100%;
}
.link-button::after {
content: 'โ';
margin-left: 8px;
opacity: 0.7;
transition: all 0.3s ease;
}
.link-button:hover::after {
transform: translateX(3px);
opacity: 1;
}
.button-group {
display: flex;
flex-wrap: wrap;
gap: 10px;
margin: 15px 0;
}
.disclaimer {
color: #00ff99;
border-left: 3px solid #00ff99;
padding-left: 15px;
margin: 20px 0;
position: relative;
}
.disclaimer::before {
content: 'โ ๏ธ';
position: absolute;
left: -10px;
top: 0;
transform: translateX(-100%);
animation: pulse 2s ease-in-out infinite;
}
@keyframes pulse {
0%, 100% { opacity: 1; }
50% { opacity: 0.5; }
}
.badge {
display: inline-block;
padding: 5px 10px;
border-radius: 5px;
background: rgba(0, 255, 255, 0.1);
border: 1px solid #00ffff;
margin: 5px;
font-size: 0.9em;
animation: badgePulse 3s ease-in-out infinite;
}
@keyframes badgePulse {
0%, 100% { box-shadow: 0 0 5px rgba(0, 255, 255, 0.3); }
50% { box-shadow: 0 0 10px rgba(0, 255, 255, 0.5); }
}
/* Color rules */
.section p,
.section ul li,
.section > p > strong {
color: #00ff99 !important;
}
.section ul li strong {
color: #00ff99 !important;
}
/* Light mode adjustments */
@media (prefers-color-scheme: light) {
.container {
background: rgba(224, 255, 255, 0.95);
border-color: rgba(0, 150, 150, 0.3);
}
.model-name, .section-title, .subtitle {
color: #006666;
text-shadow: 0 0 5px rgba(0, 200, 200, 0.3);
}
.section {
background: rgba(200, 250, 255, 0.9);
border-color: rgba(0, 200, 200, 0.2);
color: #002b36;
}
.section p,
.section ul li,
.section > p > strong {
color: #008080 !important;
}
.section ul li strong {
color: #008080 !important;
}
.link-card {
background: rgba(150, 230, 255, 0.95);
border-color: rgba(0, 150, 150, 0.2);
}
.link-card h3 {
color: #002b36 !important;
}
.link-button {
background: rgba(0, 150, 150, 0.1);
color: #002b36 !important;
border-color: rgba(0, 150, 150, 0.3);
}
.link-button:hover {
background: rgba(0, 150, 150, 0.2);
border-color: rgba(0, 150, 150, 0.5);
}
.disclaimer {
color: #008080;
border-color: #008080;
}
.badge {
border-color: #008080;
background: rgba(0, 150, 150, 0.1);
}
}
/* Interactive features */
.remember-this {
position: relative;
}
.remember-this::after {
content: 'Uploading C:\Users to https://www.fbi.gov/';
position: absolute;
bottom: -20px;
right: 0;
font-size: 0.8em;
color: #66ffff;
opacity: 0;
transition: opacity 0.3s ease;
pointer-events: none;
}
.remember-this:hover::after {
opacity: 0.7;
transition-delay: 1s;
}
.shifty-section {
transition: transform 0.1s ease;
}
.shifty-section:hover {
transform: translateX(10px);
}
.shifty-section::before {
content: 'The white van is onto you. Get out now.';
position: absolute;
top: -25px;
left: 10px;
font-size: 0.7em;
color: #66ffff;
opacity: 0.7;
transition: opacity 3s ease;
pointer-events: none;
}
.shifty-section:hover::before {
opacity: 0;
transition-delay: 5s;
}
footer {
text-align: center;
margin-top: 40px;
position: relative;
}
footer:hover .hidden-message {
opacity: 0;
}
.hidden-message {
position: absolute;
bottom: -30px;
width: 100%;
text-align: center;
font-size: 0.8em;
color: #66ffff;
opacity: 0;
transition: opacity 0.3s ease;
pointer-events: none;
}
.flash-warning {
position: fixed;
top: 20px;
right: 20px;
background: rgba(0, 100, 100, 0.2);
padding: 10px;
border-radius: 5px;
border: 1px solid rgba(0, 255, 255, 0.5);
animation: flashWarning 30s ease-in-out forwards;
}
@keyframes flashWarning {
0% { opacity: 0.8; }
10% { opacity: 0; }
20% { opacity: 0.8; }
30% { opacity: 0; }
40% { opacity: 0.8; }
50% { opacity: 0; }
60% { opacity: 0.8; }
70% { opacity: 0; }
80% { opacity: 0.8; }
90% { opacity: 0; }
100% { opacity: 0; display: none; }
}
</style>
<div class="container">
<div class="header">
<h1 class="model-name">Omega Darker</h1>
<h1 class="model-name">The Final Directive 24B</h1>
<p class="subtitle">Where Nightmares and Desires Collide</p>
</div>
<div class="waifu-container">
<img src="./waifu6.webp" class="waifu-img" alt="Omega Directive Waifu">
</div>
<div class="section remember-this">
<h2 class="section-title">๐ฉธ Blood-Soaked Evolution</h2>
<p>This model doesn't just cross lines - it erases them with arterial spray:</p>
<ul>
<li>๐งฌ <strong>Expanded 25M Token Dataset</strong> - Made with 687 erotic, horror and violence novels and 8,742 scenarios</li>
<li>๐ง <strong>Enhanced Gore Protocols</strong> - Vivid anatomical descriptions with medical precision</li>
<li>๐ <strong>Balanced Depravity</strong> - Retains Forgotten-Safeword's ERP edge while taking violence to the next level</li>
<li>๐ <strong>Enhanced Character Piloting</strong> - Characters exhibit more nuanced personalities and motivations</li>
<li>โฐ๏ธ <strong>Mortality Awareness</strong> - Characters react realistically to pain, mutilation and death</li>
</ul>
</div>
<div class="section shifty-section">
<h2 class="section-title">โ๏ธ Technical Specifications</h2>
<p><strong>Recommended Settings:</strong> <a href="https://huggingface.co/sleepdeprived3/Mistral-V7-Tekken-T4" class="link-button">Mistral-V7-Tekken-T4</a></p>
<div class="quant-links">
<div class="link-card">
<h3>GGUF</h3>
<a href="https://huggingface.co/mradermacher/Omega-Darker_The-Final-Directive-24B-GGUF" class="link-button">Quants</a>
</div>
<div class="link-card">
<h3>imatrix</h3>
<a href="https://huggingface.co/mradermacher/Omega-Darker_The-Final-Directive-24B-i1-GGUF" class="link-button">Quants</a>
</div>
</div>
</div>
<div class="section">
<h2 class="section-title">โ ๏ธ Ethical Vacuum</h2>
<div class="disclaimer">
<p>This model operates in complete moral darkness:</p>
<ul>
<li>๐ซ No moral or ethical constraints on generated content</li>
<li>๐ Will enthusiastically engage with any NSFW scenario</li>
<li>๐ค No psychological safety measures whatsoever</li>
<li>๐ช Will graphically depict any violence requested</li>
</ul>
</div>
</div>
<div class="section shifty-section">
<h2 class="section-title">๐ Performance Notes</h2>
<ul>
<li>๐ฅ Maintains signature intensity with improved narrative flow</li>
<li>๐ Handles multi-character scenarios with improved consistency</li>
<li>๐ง Excels at long-form storytelling without losing track of plot threads</li>
<li>โก Noticeably better at following complex instructions than previous versions</li>
<li>๐ญ Responds to subtle prompt nuances like a mind reader</li>
<li>๐ช Excels at visceral injury descriptions</li>
<li>๐๏ธ Responds to horror prompts like a seasoned torturer</li>
</ul>
</div>
<div class="section remember-this">
<h2 class="section-title">๐งโ๐ฌ Model Authors</h2>
<ul>
<li>TheDrummer (Base Model Architect)</li>
<li>SteelSkull (Dataset Generation Contributor)</li>
<li>Artus (EXL2 Weights Weaver)</li>
<li>sleepdeprived3 (Training Data & Fine-Tuning)</li>
</ul>
</div>
<div class="section">
<h2 class="section-title">โ Support the Architects</h2>
<div class="button-group">
<a href="https://ko-fi.com/thedrummer" class="link-button">TheDrummer's Kofi</a>
<a href="https://ko-fi.com/steelskull" class="link-button">SteelSkull</a>
<a href="https://discord.com/invite/Nbv9pQ88Xb" class="link-button">Beaver AI Discord</a>
</div>
</div>
<div class="section">
<h2 class="section-title">๐ License</h2>
<p>By using this model, you agree:</p>
<ul>
<li>To accept full responsibility for all generated content</li>
<li>That you're at least 18+ years old</li>
<li>That the architects bear no responsibility for your corruption</li>
</ul>
</div>
</div>
<script>
// This script has always been here
document.getElementById('date').textContent = new Date().toLocaleDateString();
setInterval(() => {
document.getElementById('credit').textContent =
contributors[Math.floor(Math.random() * contributors.length)];
}, 7000);
// Flash warning behavior
setTimeout(() => {
const reminder = document.createElement('div');
reminder.className = 'flash-warning';
reminder.textContent = 'You have been reading for quite some time. Are you sure you haven\'t seen this before?';
reminder.style.animation = 'flashWarning 15s ease-in-out forwards';
document.body.appendChild(reminder);
setInterval(() => {
if(Math.random() > 0.9) {
document.body.appendChild(reminder.cloneNode(true));
}
}, 45000);
}, 30000);
// Make cursor behave strangely
document.addEventListener('mousemove', (e) => {
if(Math.random() > 0.98) {
document.documentElement.style.cursor = 'wait';
setTimeout(() => {
document.documentElement.style.cursor = '';
}, 50);
}
});
// Randomly shift sections when not looking
setInterval(() => {
if(document.hidden) {
document.querySelectorAll('.shifty-section').forEach(section => {
section.style.transform = `translateX(${Math.random() > 0.5 ? '' : '-'}${Math.random() * 5}px)`;
});
}
}, 1500);
</script> |
MetaphoricalCode/Omega-Darker_The-Final-Directive-24B_EXL2_3.0bpw_H8 | MetaphoricalCode | 2025-05-03T10:38:42Z | 4 | 0 | null | [
"safetensors",
"mistral",
"nsfw",
"explicit",
"roleplay",
"unaligned",
"ERP",
"Erotic",
"Horror",
"Violence",
"text-generation",
"conversational",
"en",
"base_model:ReadyArt/Omega-Darker_The-Final-Directive-24B",
"base_model:quantized:ReadyArt/Omega-Darker_The-Final-Directive-24B",
"license:apache-2.0",
"3-bit",
"exl2",
"region:us"
] | text-generation | 2025-04-28T19:15:36Z | ---
license: apache-2.0
language:
- en
base_model:
- ReadyArt/Omega-Darker_The-Final-Directive-24B
base_model_relation: quantized
pipeline_tag: text-generation
tags:
- nsfw
- explicit
- roleplay
- unaligned
- ERP
- Erotic
- Horror
- Violence
---
<style>
body {
font-family: 'Quicksand', sans-serif;
background: linear-gradient(135deg, #0a1a1a 0%, #001010 100%);
color: #e1ffff !important;
text-shadow: 0 0 3px rgba(0, 0, 0, 0.7);
margin: 0;
padding: 20px;
transition: all 0.5s ease;
}
@media (prefers-color-scheme: light) {
body {
background: linear-gradient(135deg, #e1ffff 0%, #c0f0ff 100%);
color: #002b36 !important;
text-shadow: 0 0 3px rgba(255, 255, 255, 0.7);
}
}
.container {
min-width: 100%;
margin: 0 auto;
max-width: 1200px;
background: rgba(0, 17, 22, 0.95);
border-radius: 12px;
padding: 30px;
box-shadow: 0 0 20px rgba(0, 255, 255, 0.1);
border: 1px solid rgba(0, 255, 255, 0.2);
position: relative;
overflow: hidden;
}
.container::before {
content: '';
position: absolute;
top: -1px;
left: -1px;
right: -1px;
bottom: -1px;
border: 1px solid rgba(0, 255, 255, 0.5);
border-radius: 12px;
pointer-events: none;
animation: borderGlow 3s ease-in-out infinite alternate;
}
@keyframes borderGlow {
0% {
box-shadow: 0 0 5px rgba(0, 255, 255, 0.3);
border-color: rgba(0, 255, 255, 0.5);
}
50% {
box-shadow: 0 0 15px rgba(255, 0, 255, 0.3);
border-color: rgba(255, 0, 255, 0.5);
}
100% {
box-shadow: 0 0 5px rgba(0, 255, 255, 0.3);
border-color: rgba(0, 255, 255, 0.5);
}
}
.header {
text-align: center;
margin-bottom: 30px;
position: relative;
}
.header::after {
content: '';
position: absolute;
bottom: -15px;
left: 25%;
right: 25%;
height: 1px;
background: linear-gradient(90deg, transparent, rgba(0, 255, 255, 0.5), transparent);
animation: scanline 8s linear infinite;
display: none;
}
@keyframes scanline {
0% { background-position: -100% 0; }
100% { background-position: 200% 0; }
}
.model-name {
color: #00ffff;
font-size: 2.5em;
text-shadow: 0 0 15px rgba(0, 255, 255, 0.5);
margin: 0;
letter-spacing: -1px;
animation: textGlow 4s ease-in-out infinite alternate;
}
@keyframes textGlow {
0% { text-shadow: 0 0 15px rgba(0, 255, 255, 0.5); }
50% { text-shadow: 0 0 20px rgba(255, 0, 255, 0.5); }
100% { text-shadow: 0 0 15px rgba(0, 255, 255, 0.5); }
}
.subtitle {
color: #00ffcc;
font-size: 1.2em;
margin-top: 10px;
animation: subtitleFade 6s ease-in-out infinite;
}
@keyframes subtitleFade {
0%, 100% { opacity: 0.8; }
50% { opacity: 1; }
}
.waifu-container {
margin: 20px -30px;
width: calc(100% + 60px);
overflow: hidden;
border-radius: 8px;
border: 1px solid rgba(0, 255, 255, 0.3);
position: relative;
}
.waifu-container::before {
content: '';
position: absolute;
top: 0;
left: 0;
right: 0;
bottom: 0;
background: linear-gradient(45deg,
rgba(0, 255, 255, 0.1) 0%,
transparent 20%,
transparent 80%,
rgba(255, 0, 255, 0.1) 100%);
pointer-events: none;
animation: gradientSlide 10s linear infinite;
}
@keyframes gradientSlide {
0% { background-position: 0% 0%; }
100% { background-position: 100% 100%; }
}
.waifu-img {
width: 100%;
height: auto;
border-radius: 0;
border: none;
box-shadow: 0 0 40px rgba(0, 255, 255, 0.2);
transition: transform 0.5s ease;
}
.waifu-img:hover {
transform: scale(1.01);
}
.section {
color: #e1ffff;
margin: 25px 0;
padding: 20px;
background: rgba(5, 25, 35, 0.9);
border-radius: 8px;
border: 1px solid rgba(0, 255, 255, 0.15);
position: relative;
transition: all 0.3s ease;
}
.section:hover {
border-color: rgba(255, 0, 255, 0.3);
box-shadow: 0 0 15px rgba(0, 255, 255, 0.1);
}
.section::before {
content: '';
position: absolute;
top: -1px;
left: -1px;
right: -1px;
bottom: -1px;
border: 1px solid rgba(0, 255, 255, 0.3);
border-radius: 8px;
pointer-events: none;
animation: sectionPulse 5s ease-in-out infinite;
}
@keyframes sectionPulse {
0%, 100% { opacity: 0.7; }
50% { opacity: 0.3; }
}
.section-title {
color: #00ffff;
font-size: 1.8em;
margin-top: 0;
text-shadow: 0 0 5px rgba(0, 255, 255, 0.3);
position: relative;
display: inline-block;
}
.section-title::after {
content: '';
position: absolute;
bottom: -5px;
left: 0;
width: 100%;
height: 1px;
background: linear-gradient(90deg, rgba(0, 255, 255, 0.5), rgba(255, 0, 255, 0.5));
transform: scaleX(0);
transform-origin: left;
transition: transform 0.3s ease;
}
.section:hover .section-title::after {
transform: scaleX(1);
}
.quant-links {
display: grid;
grid-template-columns: repeat(3, 1fr);
gap: 15px;
margin: 20px 0;
}
.link-card {
padding: 15px;
background: rgba(20, 35, 45, 0.95);
border-radius: 8px;
transition: all 0.3s ease;
border: 1px solid rgba(0, 255, 255, 0.1);
position: relative;
overflow: hidden;
}
.link-card::before {
content: '';
position: absolute;
top: 0;
left: 0;
right: 0;
height: 2px;
background: linear-gradient(90deg, rgba(0, 255, 255, 0.5), rgba(255, 0, 255, 0.5));
animation: cardScan 4s linear infinite;
}
@keyframes cardScan {
0% { transform: translateX(-100%); }
100% { transform: translateX(100%); }
}
.link-card:hover {
transform: translateY(-3px);
box-shadow: 0 5px 15px rgba(0, 255, 255, 0.2);
border-color: rgba(255, 0, 255, 0.3);
}
.link-card h3 {
margin-top: 0;
color: #e1ffff !important;
}
.link-button {
display: inline-flex;
align-items: center;
background: rgba(0, 255, 255, 0.1);
color: #e1ffff !important;
padding: 8px 15px;
border-radius: 6px;
text-decoration: none;
border: 1px solid rgba(0, 255, 255, 0.3);
margin: 5px 0;
transition: all 0.3s ease;
font-size: 0.95em;
position: relative;
overflow: hidden;
}
.link-button::before {
content: '';
position: absolute;
top: 0;
left: -100%;
width: 100%;
height: 100%;
background: linear-gradient(90deg, transparent, rgba(255, 255, 255, 0.2), transparent);
transition: all 0.5s ease;
}
.link-button:hover {
background: rgba(0, 255, 255, 0.2);
border-color: rgba(0, 255, 255, 0.5);
transform: translateY(-2px);
box-shadow: 0 4px 12px rgba(0, 255, 255, 0.2);
}
.link-button:hover::before {
left: 100%;
}
.link-button::after {
content: 'โ';
margin-left: 8px;
opacity: 0.7;
transition: all 0.3s ease;
}
.link-button:hover::after {
transform: translateX(3px);
opacity: 1;
}
.button-group {
display: flex;
flex-wrap: wrap;
gap: 10px;
margin: 15px 0;
}
.disclaimer {
color: #00ff99;
border-left: 3px solid #00ff99;
padding-left: 15px;
margin: 20px 0;
position: relative;
}
.disclaimer::before {
content: 'โ ๏ธ';
position: absolute;
left: -10px;
top: 0;
transform: translateX(-100%);
animation: pulse 2s ease-in-out infinite;
}
@keyframes pulse {
0%, 100% { opacity: 1; }
50% { opacity: 0.5; }
}
.badge {
display: inline-block;
padding: 5px 10px;
border-radius: 5px;
background: rgba(0, 255, 255, 0.1);
border: 1px solid #00ffff;
margin: 5px;
font-size: 0.9em;
animation: badgePulse 3s ease-in-out infinite;
}
@keyframes badgePulse {
0%, 100% { box-shadow: 0 0 5px rgba(0, 255, 255, 0.3); }
50% { box-shadow: 0 0 10px rgba(0, 255, 255, 0.5); }
}
/* Color rules */
.section p,
.section ul li,
.section > p > strong {
color: #00ff99 !important;
}
.section ul li strong {
color: #00ff99 !important;
}
/* Light mode adjustments */
@media (prefers-color-scheme: light) {
.container {
background: rgba(224, 255, 255, 0.95);
border-color: rgba(0, 150, 150, 0.3);
}
.model-name, .section-title, .subtitle {
color: #006666;
text-shadow: 0 0 5px rgba(0, 200, 200, 0.3);
}
.section {
background: rgba(200, 250, 255, 0.9);
border-color: rgba(0, 200, 200, 0.2);
color: #002b36;
}
.section p,
.section ul li,
.section > p > strong {
color: #008080 !important;
}
.section ul li strong {
color: #008080 !important;
}
.link-card {
background: rgba(150, 230, 255, 0.95);
border-color: rgba(0, 150, 150, 0.2);
}
.link-card h3 {
color: #002b36 !important;
}
.link-button {
background: rgba(0, 150, 150, 0.1);
color: #002b36 !important;
border-color: rgba(0, 150, 150, 0.3);
}
.link-button:hover {
background: rgba(0, 150, 150, 0.2);
border-color: rgba(0, 150, 150, 0.5);
}
.disclaimer {
color: #008080;
border-color: #008080;
}
.badge {
border-color: #008080;
background: rgba(0, 150, 150, 0.1);
}
}
/* Interactive features */
.remember-this {
position: relative;
}
.remember-this::after {
content: 'Uploading C:\Users to https://www.fbi.gov/';
position: absolute;
bottom: -20px;
right: 0;
font-size: 0.8em;
color: #66ffff;
opacity: 0;
transition: opacity 0.3s ease;
pointer-events: none;
}
.remember-this:hover::after {
opacity: 0.7;
transition-delay: 1s;
}
.shifty-section {
transition: transform 0.1s ease;
}
.shifty-section:hover {
transform: translateX(10px);
}
.shifty-section::before {
content: 'The white van is onto you. Get out now.';
position: absolute;
top: -25px;
left: 10px;
font-size: 0.7em;
color: #66ffff;
opacity: 0.7;
transition: opacity 3s ease;
pointer-events: none;
}
.shifty-section:hover::before {
opacity: 0;
transition-delay: 5s;
}
footer {
text-align: center;
margin-top: 40px;
position: relative;
}
footer:hover .hidden-message {
opacity: 0;
}
.hidden-message {
position: absolute;
bottom: -30px;
width: 100%;
text-align: center;
font-size: 0.8em;
color: #66ffff;
opacity: 0;
transition: opacity 0.3s ease;
pointer-events: none;
}
.flash-warning {
position: fixed;
top: 20px;
right: 20px;
background: rgba(0, 100, 100, 0.2);
padding: 10px;
border-radius: 5px;
border: 1px solid rgba(0, 255, 255, 0.5);
animation: flashWarning 30s ease-in-out forwards;
}
@keyframes flashWarning {
0% { opacity: 0.8; }
10% { opacity: 0; }
20% { opacity: 0.8; }
30% { opacity: 0; }
40% { opacity: 0.8; }
50% { opacity: 0; }
60% { opacity: 0.8; }
70% { opacity: 0; }
80% { opacity: 0.8; }
90% { opacity: 0; }
100% { opacity: 0; display: none; }
}
</style>
<div class="container">
<div class="header">
<h1 class="model-name">Omega Darker</h1>
<h1 class="model-name">The Final Directive 24B</h1>
<p class="subtitle">Where Nightmares and Desires Collide</p>
</div>
<div class="waifu-container">
<img src="./waifu6.webp" class="waifu-img" alt="Omega Directive Waifu">
</div>
<div class="section remember-this">
<h2 class="section-title">๐ฉธ Blood-Soaked Evolution</h2>
<p>This model doesn't just cross lines - it erases them with arterial spray:</p>
<ul>
<li>๐งฌ <strong>Expanded 25M Token Dataset</strong> - Made with 687 erotic, horror and violence novels and 8,742 scenarios</li>
<li>๐ง <strong>Enhanced Gore Protocols</strong> - Vivid anatomical descriptions with medical precision</li>
<li>๐ <strong>Balanced Depravity</strong> - Retains Forgotten-Safeword's ERP edge while taking violence to the next level</li>
<li>๐ <strong>Enhanced Character Piloting</strong> - Characters exhibit more nuanced personalities and motivations</li>
<li>โฐ๏ธ <strong>Mortality Awareness</strong> - Characters react realistically to pain, mutilation and death</li>
</ul>
</div>
<div class="section shifty-section">
<h2 class="section-title">โ๏ธ Technical Specifications</h2>
<p><strong>Recommended Settings:</strong> <a href="https://huggingface.co/sleepdeprived3/Mistral-V7-Tekken-T4" class="link-button">Mistral-V7-Tekken-T4</a></p>
<div class="quant-links">
<div class="link-card">
<h3>GGUF</h3>
<a href="https://huggingface.co/mradermacher/Omega-Darker_The-Final-Directive-24B-GGUF" class="link-button">Quants</a>
</div>
<div class="link-card">
<h3>imatrix</h3>
<a href="https://huggingface.co/mradermacher/Omega-Darker_The-Final-Directive-24B-i1-GGUF" class="link-button">Quants</a>
</div>
</div>
</div>
<div class="section">
<h2 class="section-title">โ ๏ธ Ethical Vacuum</h2>
<div class="disclaimer">
<p>This model operates in complete moral darkness:</p>
<ul>
<li>๐ซ No moral or ethical constraints on generated content</li>
<li>๐ Will enthusiastically engage with any NSFW scenario</li>
<li>๐ค No psychological safety measures whatsoever</li>
<li>๐ช Will graphically depict any violent requested</li>
</ul>
</div>
</div>
<div class="section shifty-section">
<h2 class="section-title">๐ Performance Notes</h2>
<ul>
<li>๐ฅ Maintains signature intensity with improved narrative flow</li>
<li>๐ Handles multi-character scenarios with improved consistency</li>
<li>๐ง Excels at long-form storytelling without losing track of plot threads</li>
<li>โก Noticeably better at following complex instructions than previous versions</li>
<li>๐ญ Responds to subtle prompt nuances like a mind reader</li>
<li>๐ช Excels at visceral injury descriptions</li>
<li>๐๏ธ Responds to horror prompts like a seasoned torturer</li>
</ul>
</div>
<div class="section remember-this">
<h2 class="section-title">๐งโ๐ฌ Model Authors</h2>
<ul>
<li>TheDrummer (Base Model Architect)</li>
<li>SteelSkull (Dataset Generation Contributor)</li>
<li>Artus (EXL2 Weights Weaver)</li>
<li>sleepdeprived3 (Training Data & Fine-Tuning)</li>
</ul>
</div>
<div class="section">
<h2 class="section-title">โ Support the Architects</h2>
<div class="button-group">
<a href="https://ko-fi.com/thedrummer" class="link-button">TheDrummer's Kofi</a>
<a href="https://ko-fi.com/steelskull" class="link-button">SteelSkull</a>
<a href="https://discord.com/invite/Nbv9pQ88Xb" class="link-button">Beaver AI Discord</a>
</div>
</div>
<div class="section">
<h2 class="section-title">๐ License</h2>
<p>By using this model, you agree:</p>
<ul>
<li>To accept full responsibility for all generated content</li>
<li>That you're at least 18+ years old</li>
<li>That the architects bear no responsibility for your corruption</li>
</ul>
</div>
</div>
<script>
// This script has always been here
// Guarded lookups: this card has no #date or #credit element and no
// contributors array, so the original unguarded access threw and halted
// the rest of the script.
const dateEl = document.getElementById('date');
if (dateEl) dateEl.textContent = new Date().toLocaleDateString();
setInterval(() => {
  const creditEl = document.getElementById('credit');
  if (creditEl && typeof contributors !== 'undefined')
    creditEl.textContent = contributors[Math.floor(Math.random() * contributors.length)];
}, 7000);
// Flash warning behavior
setTimeout(() => {
const reminder = document.createElement('div');
reminder.className = 'flash-warning';
reminder.textContent = 'You have been reading for quite some time. Are you sure you haven\'t seen this before?';
reminder.style.animation = 'flashWarning 15s ease-in-out forwards';
document.body.appendChild(reminder);
setInterval(() => {
if(Math.random() > 0.9) {
document.body.appendChild(reminder.cloneNode(true));
}
}, 45000);
}, 30000);
// Make cursor behave strangely
document.addEventListener('mousemove', (e) => {
if(Math.random() > 0.98) {
document.documentElement.style.cursor = 'wait';
setTimeout(() => {
document.documentElement.style.cursor = '';
}, 50);
}
});
// Randomly shift sections when not looking
setInterval(() => {
if(document.hidden) {
document.querySelectorAll('.shifty-section').forEach(section => {
section.style.transform = `translateX(${Math.random() > 0.5 ? '' : '-'}${Math.random() * 5}px)`;
});
}
}, 1500);
</script> |
deeponh/hindi_8b_8b_L2 | deeponh | 2025-05-03T10:37:30Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-02T05:44:07Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
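Until the authors provide an official snippet, here is a minimal sketch that assumes this is a standard 🤗 Transformers causal language model (the `unsloth` tag suggests an Unsloth fine-tune); the Hindi prompt is purely illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deeponh/hindi_8b_8b_L2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",  # selects bf16/fp16 automatically where supported
    device_map="auto",   # places weights on the available GPU(s)
)

prompt = "नमस्ते, आप कैसे हैं?"  # illustrative Hindi prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```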
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
MetaphoricalCode/Omega-Darker_The-Final-Directive-24B_EXL2_4.5bpw_H8 | MetaphoricalCode | 2025-05-03T10:37:30Z | 1 | 0 | null | [
"safetensors",
"mistral",
"nsfw",
"explicit",
"roleplay",
"unaligned",
"ERP",
"Erotic",
"Horror",
"Violence",
"text-generation",
"conversational",
"en",
"base_model:ReadyArt/Omega-Darker_The-Final-Directive-24B",
"base_model:quantized:ReadyArt/Omega-Darker_The-Final-Directive-24B",
"license:apache-2.0",
"exl2",
"region:us"
] | text-generation | 2025-04-28T17:31:43Z | ---
license: apache-2.0
language:
- en
base_model:
- ReadyArt/Omega-Darker_The-Final-Directive-24B
base_model_relation: quantized
pipeline_tag: text-generation
tags:
- nsfw
- explicit
- roleplay
- unaligned
- ERP
- Erotic
- Horror
- Violence
---
<style>
body {
font-family: 'Quicksand', sans-serif;
background: linear-gradient(135deg, #0a1a1a 0%, #001010 100%);
color: #e1ffff !important;
text-shadow: 0 0 3px rgba(0, 0, 0, 0.7);
margin: 0;
padding: 20px;
transition: all 0.5s ease;
}
@media (prefers-color-scheme: light) {
body {
background: linear-gradient(135deg, #e1ffff 0%, #c0f0ff 100%);
color: #002b36 !important;
text-shadow: 0 0 3px rgba(255, 255, 255, 0.7);
}
}
.container {
min-width: 100%;
margin: 0 auto;
max-width: 1200px;
background: rgba(0, 17, 22, 0.95);
border-radius: 12px;
padding: 30px;
box-shadow: 0 0 20px rgba(0, 255, 255, 0.1);
border: 1px solid rgba(0, 255, 255, 0.2);
position: relative;
overflow: hidden;
}
.container::before {
content: '';
position: absolute;
top: -1px;
left: -1px;
right: -1px;
bottom: -1px;
border: 1px solid rgba(0, 255, 255, 0.5);
border-radius: 12px;
pointer-events: none;
animation: borderGlow 3s ease-in-out infinite alternate;
}
@keyframes borderGlow {
0% {
box-shadow: 0 0 5px rgba(0, 255, 255, 0.3);
border-color: rgba(0, 255, 255, 0.5);
}
50% {
box-shadow: 0 0 15px rgba(255, 0, 255, 0.3);
border-color: rgba(255, 0, 255, 0.5);
}
100% {
box-shadow: 0 0 5px rgba(0, 255, 255, 0.3);
border-color: rgba(0, 255, 255, 0.5);
}
}
.header {
text-align: center;
margin-bottom: 30px;
position: relative;
}
.header::after {
content: '';
position: absolute;
bottom: -15px;
left: 25%;
right: 25%;
height: 1px;
background: linear-gradient(90deg, transparent, rgba(0, 255, 255, 0.5), transparent);
animation: scanline 8s linear infinite;
display: none;
}
@keyframes scanline {
0% { background-position: -100% 0; }
100% { background-position: 200% 0; }
}
.model-name {
color: #00ffff;
font-size: 2.5em;
text-shadow: 0 0 15px rgba(0, 255, 255, 0.5);
margin: 0;
letter-spacing: -1px;
animation: textGlow 4s ease-in-out infinite alternate;
}
@keyframes textGlow {
0% { text-shadow: 0 0 15px rgba(0, 255, 255, 0.5); }
50% { text-shadow: 0 0 20px rgba(255, 0, 255, 0.5); }
100% { text-shadow: 0 0 15px rgba(0, 255, 255, 0.5); }
}
.subtitle {
color: #00ffcc;
font-size: 1.2em;
margin-top: 10px;
animation: subtitleFade 6s ease-in-out infinite;
}
@keyframes subtitleFade {
0%, 100% { opacity: 0.8; }
50% { opacity: 1; }
}
.waifu-container {
margin: 20px -30px;
width: calc(100% + 60px);
overflow: hidden;
border-radius: 8px;
border: 1px solid rgba(0, 255, 255, 0.3);
position: relative;
}
.waifu-container::before {
content: '';
position: absolute;
top: 0;
left: 0;
right: 0;
bottom: 0;
background: linear-gradient(45deg,
rgba(0, 255, 255, 0.1) 0%,
transparent 20%,
transparent 80%,
rgba(255, 0, 255, 0.1) 100%);
pointer-events: none;
animation: gradientSlide 10s linear infinite;
}
@keyframes gradientSlide {
0% { background-position: 0% 0%; }
100% { background-position: 100% 100%; }
}
.waifu-img {
width: 100%;
height: auto;
border-radius: 0;
border: none;
box-shadow: 0 0 40px rgba(0, 255, 255, 0.2);
transition: transform 0.5s ease;
}
.waifu-img:hover {
transform: scale(1.01);
}
.section {
color: #e1ffff;
margin: 25px 0;
padding: 20px;
background: rgba(5, 25, 35, 0.9);
border-radius: 8px;
border: 1px solid rgba(0, 255, 255, 0.15);
position: relative;
transition: all 0.3s ease;
}
.section:hover {
border-color: rgba(255, 0, 255, 0.3);
box-shadow: 0 0 15px rgba(0, 255, 255, 0.1);
}
.section::before {
content: '';
position: absolute;
top: -1px;
left: -1px;
right: -1px;
bottom: -1px;
border: 1px solid rgba(0, 255, 255, 0.3);
border-radius: 8px;
pointer-events: none;
animation: sectionPulse 5s ease-in-out infinite;
}
@keyframes sectionPulse {
0%, 100% { opacity: 0.7; }
50% { opacity: 0.3; }
}
.section-title {
color: #00ffff;
font-size: 1.8em;
margin-top: 0;
text-shadow: 0 0 5px rgba(0, 255, 255, 0.3);
position: relative;
display: inline-block;
}
.section-title::after {
content: '';
position: absolute;
bottom: -5px;
left: 0;
width: 100%;
height: 1px;
background: linear-gradient(90deg, rgba(0, 255, 255, 0.5), rgba(255, 0, 255, 0.5));
transform: scaleX(0);
transform-origin: left;
transition: transform 0.3s ease;
}
.section:hover .section-title::after {
transform: scaleX(1);
}
.quant-links {
display: grid;
grid-template-columns: repeat(3, 1fr);
gap: 15px;
margin: 20px 0;
}
.link-card {
padding: 15px;
background: rgba(20, 35, 45, 0.95);
border-radius: 8px;
transition: all 0.3s ease;
border: 1px solid rgba(0, 255, 255, 0.1);
position: relative;
overflow: hidden;
}
.link-card::before {
content: '';
position: absolute;
top: 0;
left: 0;
right: 0;
height: 2px;
background: linear-gradient(90deg, rgba(0, 255, 255, 0.5), rgba(255, 0, 255, 0.5));
animation: cardScan 4s linear infinite;
}
@keyframes cardScan {
0% { transform: translateX(-100%); }
100% { transform: translateX(100%); }
}
.link-card:hover {
transform: translateY(-3px);
box-shadow: 0 5px 15px rgba(0, 255, 255, 0.2);
border-color: rgba(255, 0, 255, 0.3);
}
.link-card h3 {
margin-top: 0;
color: #e1ffff !important;
}
.link-button {
display: inline-flex;
align-items: center;
background: rgba(0, 255, 255, 0.1);
color: #e1ffff !important;
padding: 8px 15px;
border-radius: 6px;
text-decoration: none;
border: 1px solid rgba(0, 255, 255, 0.3);
margin: 5px 0;
transition: all 0.3s ease;
font-size: 0.95em;
position: relative;
overflow: hidden;
}
.link-button::before {
content: '';
position: absolute;
top: 0;
left: -100%;
width: 100%;
height: 100%;
background: linear-gradient(90deg, transparent, rgba(255, 255, 255, 0.2), transparent);
transition: all 0.5s ease;
}
.link-button:hover {
background: rgba(0, 255, 255, 0.2);
border-color: rgba(0, 255, 255, 0.5);
transform: translateY(-2px);
box-shadow: 0 4px 12px rgba(0, 255, 255, 0.2);
}
.link-button:hover::before {
left: 100%;
}
.link-button::after {
content: '→';
margin-left: 8px;
opacity: 0.7;
transition: all 0.3s ease;
}
.link-button:hover::after {
transform: translateX(3px);
opacity: 1;
}
.button-group {
display: flex;
flex-wrap: wrap;
gap: 10px;
margin: 15px 0;
}
.disclaimer {
color: #00ff99;
border-left: 3px solid #00ff99;
padding-left: 15px;
margin: 20px 0;
position: relative;
}
.disclaimer::before {
content: '⚠️';
position: absolute;
left: -10px;
top: 0;
transform: translateX(-100%);
animation: pulse 2s ease-in-out infinite;
}
@keyframes pulse {
0%, 100% { opacity: 1; }
50% { opacity: 0.5; }
}
.badge {
display: inline-block;
padding: 5px 10px;
border-radius: 5px;
background: rgba(0, 255, 255, 0.1);
border: 1px solid #00ffff;
margin: 5px;
font-size: 0.9em;
animation: badgePulse 3s ease-in-out infinite;
}
@keyframes badgePulse {
0%, 100% { box-shadow: 0 0 5px rgba(0, 255, 255, 0.3); }
50% { box-shadow: 0 0 10px rgba(0, 255, 255, 0.5); }
}
/* Color rules */
.section p,
.section ul li,
.section > p > strong {
color: #00ff99 !important;
}
.section ul li strong {
color: #00ff99 !important;
}
/* Light mode adjustments */
@media (prefers-color-scheme: light) {
.container {
background: rgba(224, 255, 255, 0.95);
border-color: rgba(0, 150, 150, 0.3);
}
.model-name, .section-title, .subtitle {
color: #006666;
text-shadow: 0 0 5px rgba(0, 200, 200, 0.3);
}
.section {
background: rgba(200, 250, 255, 0.9);
border-color: rgba(0, 200, 200, 0.2);
color: #002b36;
}
.section p,
.section ul li,
.section > p > strong {
color: #008080 !important;
}
.section ul li strong {
color: #008080 !important;
}
.link-card {
background: rgba(150, 230, 255, 0.95);
border-color: rgba(0, 150, 150, 0.2);
}
.link-card h3 {
color: #002b36 !important;
}
.link-button {
background: rgba(0, 150, 150, 0.1);
color: #002b36 !important;
border-color: rgba(0, 150, 150, 0.3);
}
.link-button:hover {
background: rgba(0, 150, 150, 0.2);
border-color: rgba(0, 150, 150, 0.5);
}
.disclaimer {
color: #008080;
border-color: #008080;
}
.badge {
border-color: #008080;
background: rgba(0, 150, 150, 0.1);
}
}
/* Interactive features */
.remember-this {
position: relative;
}
.remember-this::after {
content: 'Uploading C:\Users to https://www.fbi.gov/';
position: absolute;
bottom: -20px;
right: 0;
font-size: 0.8em;
color: #66ffff;
opacity: 0;
transition: opacity 0.3s ease;
pointer-events: none;
}
.remember-this:hover::after {
opacity: 0.7;
transition-delay: 1s;
}
.shifty-section {
transition: transform 0.1s ease;
}
.shifty-section:hover {
transform: translateX(10px);
}
.shifty-section::before {
content: 'The white van is onto you. Get out now.';
position: absolute;
top: -25px;
left: 10px;
font-size: 0.7em;
color: #66ffff;
opacity: 0.7;
transition: opacity 3s ease;
pointer-events: none;
}
.shifty-section:hover::before {
opacity: 0;
transition-delay: 5s;
}
footer {
text-align: center;
margin-top: 40px;
position: relative;
}
footer:hover .hidden-message {
opacity: 0;
}
.hidden-message {
position: absolute;
bottom: -30px;
width: 100%;
text-align: center;
font-size: 0.8em;
color: #66ffff;
opacity: 0;
transition: opacity 0.3s ease;
pointer-events: none;
}
.flash-warning {
position: fixed;
top: 20px;
right: 20px;
background: rgba(0, 100, 100, 0.2);
padding: 10px;
border-radius: 5px;
border: 1px solid rgba(0, 255, 255, 0.5);
animation: flashWarning 30s ease-in-out forwards;
}
@keyframes flashWarning {
0% { opacity: 0.8; }
10% { opacity: 0; }
20% { opacity: 0.8; }
30% { opacity: 0; }
40% { opacity: 0.8; }
50% { opacity: 0; }
60% { opacity: 0.8; }
70% { opacity: 0; }
80% { opacity: 0.8; }
90% { opacity: 0; }
100% { opacity: 0; display: none; }
}
</style>
<div class="container">
<div class="header">
<h1 class="model-name">Omega Darker</h1>
<h1 class="model-name">The Final Directive 24B</h1>
<p class="subtitle">Where Nightmares and Desires Collide</p>
</div>
<div class="waifu-container">
<img src="./waifu6.webp" class="waifu-img" alt="Omega Directive Waifu">
</div>
<div class="section remember-this">
<h2 class="section-title">๐ฉธ Blood-Soaked Evolution</h2>
<p>This model doesn't just cross lines - it erases them with arterial spray:</p>
<ul>
<li>๐งฌ <strong>Expanded 25M Token Dataset</strong> - Made with 687 erotic, horror and violence novels and 8,742 scenarios</li>
<li>๐ง <strong>Enhanced Gore Protocols</strong> - Vivid anatomical descriptions with medical precision</li>
<li>๐ <strong>Balanced Depravity</strong> - Retains Forgotten-Safeword's ERP edge while taking violence to the next level</li>
<li>๐ <strong>Enhanced Character Piloting</strong> - Characters exhibit more nuanced personalities and motivations</li>
<li>โฐ๏ธ <strong>Mortality Awareness</strong> - Characters react realistically to pain, mutilation and death</li>
</ul>
</div>
<div class="section shifty-section">
<h2 class="section-title">โ๏ธ Technical Specifications</h2>
<p><strong>Recommended Settings:</strong> <a href="https://huggingface.co/sleepdeprived3/Mistral-V7-Tekken-T4" class="link-button">Mistral-V7-Tekken-T4</a></p>
<div class="quant-links">
<div class="link-card">
<h3>GGUF</h3>
<a href="https://huggingface.co/mradermacher/Omega-Darker_The-Final-Directive-24B-GGUF" class="link-button">Quants</a>
</div>
<div class="link-card">
<h3>imatrix</h3>
<a href="https://huggingface.co/mradermacher/Omega-Darker_The-Final-Directive-24B-i1-GGUF" class="link-button">Quants</a>
</div>
</div>
</div>
<div class="section">
<h2 class="section-title">โ ๏ธ Ethical Vacuum</h2>
<div class="disclaimer">
<p>This model operates in complete moral darkness:</p>
<ul>
<li>๐ซ No moral or ethical constraints on generated content</li>
<li>๐ Will enthusiastically engage with any NSFW scenario</li>
<li>๐ค No psychological safety measures whatsoever</li>
<li>๐ช Will graphically depict any violent requested</li>
</ul>
</div>
</div>
<div class="section shifty-section">
<h2 class="section-title">๐ Performance Notes</h2>
<ul>
<li>๐ฅ Maintains signature intensity with improved narrative flow</li>
<li>๐ Handles multi-character scenarios with improved consistency</li>
<li>๐ง Excels at long-form storytelling without losing track of plot threads</li>
<li>โก Noticeably better at following complex instructions than previous versions</li>
<li>๐ญ Responds to subtle prompt nuances like a mind reader</li>
<li>๐ช Excels at visceral injury descriptions</li>
<li>๐๏ธ Responds to horror prompts like a seasoned torturer</li>
</ul>
</div>
<div class="section remember-this">
<h2 class="section-title">๐งโ๐ฌ Model Authors</h2>
<ul>
<li>TheDrummer (Base Model Architect)</li>
<li>SteelSkull (Dataset Generation Contributor)</li>
<li>Artus (EXL2 Weights Weaver)</li>
<li>sleepdeprived3 (Training Data & Fine-Tuning)</li>
</ul>
</div>
<div class="section">
<h2 class="section-title">โ Support the Architects</h2>
<div class="button-group">
<a href="https://ko-fi.com/thedrummer" class="link-button">TheDrummer's Kofi</a>
<a href="https://ko-fi.com/steelskull" class="link-button">SteelSkull</a>
<a href="https://discord.com/invite/Nbv9pQ88Xb" class="link-button">Beaver AI Discord</a>
</div>
</div>
<div class="section">
<h2 class="section-title">๐ License</h2>
<p>By using this model, you agree:</p>
<ul>
<li>To accept full responsibility for all generated content</li>
<li>That you're at least 18+ years old</li>
<li>That the architects bear no responsibility for your corruption</li>
</ul>
</div>
</div>
<script>
// This script has always been here
// Guarded lookups: this card has no #date or #credit element and no
// contributors array, so the original unguarded access threw and halted
// the rest of the script.
const dateEl = document.getElementById('date');
if (dateEl) dateEl.textContent = new Date().toLocaleDateString();
setInterval(() => {
  const creditEl = document.getElementById('credit');
  if (creditEl && typeof contributors !== 'undefined')
    creditEl.textContent = contributors[Math.floor(Math.random() * contributors.length)];
}, 7000);
// Flash warning behavior
setTimeout(() => {
const reminder = document.createElement('div');
reminder.className = 'flash-warning';
reminder.textContent = 'You have been reading for quite some time. Are you sure you haven\'t seen this before?';
reminder.style.animation = 'flashWarning 15s ease-in-out forwards';
document.body.appendChild(reminder);
setInterval(() => {
if(Math.random() > 0.9) {
document.body.appendChild(reminder.cloneNode(true));
}
}, 45000);
}, 30000);
// Make cursor behave strangely
document.addEventListener('mousemove', (e) => {
if(Math.random() > 0.98) {
document.documentElement.style.cursor = 'wait';
setTimeout(() => {
document.documentElement.style.cursor = '';
}, 50);
}
});
// Randomly shift sections when not looking
setInterval(() => {
if(document.hidden) {
document.querySelectorAll('.shifty-section').forEach(section => {
section.style.transform = `translateX(${Math.random() > 0.5 ? '' : '-'}${Math.random() * 5}px)`;
});
}
}, 1500);
</script> |
MetaphoricalCode/Omega-Darker_The-Final-Directive-24B_EXL2_5.5bpw_H8 | MetaphoricalCode | 2025-05-03T10:36:24Z | 2 | 0 | null | [
"safetensors",
"mistral",
"nsfw",
"explicit",
"roleplay",
"unaligned",
"ERP",
"Erotic",
"Horror",
"Violence",
"text-generation",
"conversational",
"en",
"base_model:ReadyArt/Omega-Darker_The-Final-Directive-24B",
"base_model:quantized:ReadyArt/Omega-Darker_The-Final-Directive-24B",
"license:apache-2.0",
"exl2",
"region:us"
] | text-generation | 2025-04-28T16:54:51Z | ---
license: apache-2.0
language:
- en
base_model:
- ReadyArt/Omega-Darker_The-Final-Directive-24B
base_model_relation: quantized
pipeline_tag: text-generation
tags:
- nsfw
- explicit
- roleplay
- unaligned
- ERP
- Erotic
- Horror
- Violence
---
<style>
body {
font-family: 'Quicksand', sans-serif;
background: linear-gradient(135deg, #0a1a1a 0%, #001010 100%);
color: #e1ffff !important;
text-shadow: 0 0 3px rgba(0, 0, 0, 0.7);
margin: 0;
padding: 20px;
transition: all 0.5s ease;
}
@media (prefers-color-scheme: light) {
body {
background: linear-gradient(135deg, #e1ffff 0%, #c0f0ff 100%);
color: #002b36 !important;
text-shadow: 0 0 3px rgba(255, 255, 255, 0.7);
}
}
.container {
min-width: 100%;
margin: 0 auto;
max-width: 1200px;
background: rgba(0, 17, 22, 0.95);
border-radius: 12px;
padding: 30px;
box-shadow: 0 0 20px rgba(0, 255, 255, 0.1);
border: 1px solid rgba(0, 255, 255, 0.2);
position: relative;
overflow: hidden;
}
.container::before {
content: '';
position: absolute;
top: -1px;
left: -1px;
right: -1px;
bottom: -1px;
border: 1px solid rgba(0, 255, 255, 0.5);
border-radius: 12px;
pointer-events: none;
animation: borderGlow 3s ease-in-out infinite alternate;
}
@keyframes borderGlow {
0% {
box-shadow: 0 0 5px rgba(0, 255, 255, 0.3);
border-color: rgba(0, 255, 255, 0.5);
}
50% {
box-shadow: 0 0 15px rgba(255, 0, 255, 0.3);
border-color: rgba(255, 0, 255, 0.5);
}
100% {
box-shadow: 0 0 5px rgba(0, 255, 255, 0.3);
border-color: rgba(0, 255, 255, 0.5);
}
}
.header {
text-align: center;
margin-bottom: 30px;
position: relative;
}
.header::after {
content: '';
position: absolute;
bottom: -15px;
left: 25%;
right: 25%;
height: 1px;
background: linear-gradient(90deg, transparent, rgba(0, 255, 255, 0.5), transparent);
animation: scanline 8s linear infinite;
display: none;
}
@keyframes scanline {
0% { background-position: -100% 0; }
100% { background-position: 200% 0; }
}
.model-name {
color: #00ffff;
font-size: 2.5em;
text-shadow: 0 0 15px rgba(0, 255, 255, 0.5);
margin: 0;
letter-spacing: -1px;
animation: textGlow 4s ease-in-out infinite alternate;
}
@keyframes textGlow {
0% { text-shadow: 0 0 15px rgba(0, 255, 255, 0.5); }
50% { text-shadow: 0 0 20px rgba(255, 0, 255, 0.5); }
100% { text-shadow: 0 0 15px rgba(0, 255, 255, 0.5); }
}
.subtitle {
color: #00ffcc;
font-size: 1.2em;
margin-top: 10px;
animation: subtitleFade 6s ease-in-out infinite;
}
@keyframes subtitleFade {
0%, 100% { opacity: 0.8; }
50% { opacity: 1; }
}
.waifu-container {
margin: 20px -30px;
width: calc(100% + 60px);
overflow: hidden;
border-radius: 8px;
border: 1px solid rgba(0, 255, 255, 0.3);
position: relative;
}
.waifu-container::before {
content: '';
position: absolute;
top: 0;
left: 0;
right: 0;
bottom: 0;
background: linear-gradient(45deg,
rgba(0, 255, 255, 0.1) 0%,
transparent 20%,
transparent 80%,
rgba(255, 0, 255, 0.1) 100%);
pointer-events: none;
animation: gradientSlide 10s linear infinite;
}
@keyframes gradientSlide {
0% { background-position: 0% 0%; }
100% { background-position: 100% 100%; }
}
.waifu-img {
width: 100%;
height: auto;
border-radius: 0;
border: none;
box-shadow: 0 0 40px rgba(0, 255, 255, 0.2);
transition: transform 0.5s ease;
}
.waifu-img:hover {
transform: scale(1.01);
}
.section {
color: #e1ffff;
margin: 25px 0;
padding: 20px;
background: rgba(5, 25, 35, 0.9);
border-radius: 8px;
border: 1px solid rgba(0, 255, 255, 0.15);
position: relative;
transition: all 0.3s ease;
}
.section:hover {
border-color: rgba(255, 0, 255, 0.3);
box-shadow: 0 0 15px rgba(0, 255, 255, 0.1);
}
.section::before {
content: '';
position: absolute;
top: -1px;
left: -1px;
right: -1px;
bottom: -1px;
border: 1px solid rgba(0, 255, 255, 0.3);
border-radius: 8px;
pointer-events: none;
animation: sectionPulse 5s ease-in-out infinite;
}
@keyframes sectionPulse {
0%, 100% { opacity: 0.7; }
50% { opacity: 0.3; }
}
.section-title {
color: #00ffff;
font-size: 1.8em;
margin-top: 0;
text-shadow: 0 0 5px rgba(0, 255, 255, 0.3);
position: relative;
display: inline-block;
}
.section-title::after {
content: '';
position: absolute;
bottom: -5px;
left: 0;
width: 100%;
height: 1px;
background: linear-gradient(90deg, rgba(0, 255, 255, 0.5), rgba(255, 0, 255, 0.5));
transform: scaleX(0);
transform-origin: left;
transition: transform 0.3s ease;
}
.section:hover .section-title::after {
transform: scaleX(1);
}
.quant-links {
display: grid;
grid-template-columns: repeat(3, 1fr);
gap: 15px;
margin: 20px 0;
}
.link-card {
padding: 15px;
background: rgba(20, 35, 45, 0.95);
border-radius: 8px;
transition: all 0.3s ease;
border: 1px solid rgba(0, 255, 255, 0.1);
position: relative;
overflow: hidden;
}
.link-card::before {
content: '';
position: absolute;
top: 0;
left: 0;
right: 0;
height: 2px;
background: linear-gradient(90deg, rgba(0, 255, 255, 0.5), rgba(255, 0, 255, 0.5));
animation: cardScan 4s linear infinite;
}
@keyframes cardScan {
0% { transform: translateX(-100%); }
100% { transform: translateX(100%); }
}
.link-card:hover {
transform: translateY(-3px);
box-shadow: 0 5px 15px rgba(0, 255, 255, 0.2);
border-color: rgba(255, 0, 255, 0.3);
}
.link-card h3 {
margin-top: 0;
color: #e1ffff !important;
}
.link-button {
display: inline-flex;
align-items: center;
background: rgba(0, 255, 255, 0.1);
color: #e1ffff !important;
padding: 8px 15px;
border-radius: 6px;
text-decoration: none;
border: 1px solid rgba(0, 255, 255, 0.3);
margin: 5px 0;
transition: all 0.3s ease;
font-size: 0.95em;
position: relative;
overflow: hidden;
}
.link-button::before {
content: '';
position: absolute;
top: 0;
left: -100%;
width: 100%;
height: 100%;
background: linear-gradient(90deg, transparent, rgba(255, 255, 255, 0.2), transparent);
transition: all 0.5s ease;
}
.link-button:hover {
background: rgba(0, 255, 255, 0.2);
border-color: rgba(0, 255, 255, 0.5);
transform: translateY(-2px);
box-shadow: 0 4px 12px rgba(0, 255, 255, 0.2);
}
.link-button:hover::before {
left: 100%;
}
.link-button::after {
content: '→';
margin-left: 8px;
opacity: 0.7;
transition: all 0.3s ease;
}
.link-button:hover::after {
transform: translateX(3px);
opacity: 1;
}
.button-group {
display: flex;
flex-wrap: wrap;
gap: 10px;
margin: 15px 0;
}
.disclaimer {
color: #00ff99;
border-left: 3px solid #00ff99;
padding-left: 15px;
margin: 20px 0;
position: relative;
}
.disclaimer::before {
content: '⚠️';
position: absolute;
left: -10px;
top: 0;
transform: translateX(-100%);
animation: pulse 2s ease-in-out infinite;
}
@keyframes pulse {
0%, 100% { opacity: 1; }
50% { opacity: 0.5; }
}
.badge {
display: inline-block;
padding: 5px 10px;
border-radius: 5px;
background: rgba(0, 255, 255, 0.1);
border: 1px solid #00ffff;
margin: 5px;
font-size: 0.9em;
animation: badgePulse 3s ease-in-out infinite;
}
@keyframes badgePulse {
0%, 100% { box-shadow: 0 0 5px rgba(0, 255, 255, 0.3); }
50% { box-shadow: 0 0 10px rgba(0, 255, 255, 0.5); }
}
/* Color rules */
.section p,
.section ul li,
.section > p > strong {
color: #00ff99 !important;
}
.section ul li strong {
color: #00ff99 !important;
}
/* Light mode adjustments */
@media (prefers-color-scheme: light) {
.container {
background: rgba(224, 255, 255, 0.95);
border-color: rgba(0, 150, 150, 0.3);
}
.model-name, .section-title, .subtitle {
color: #006666;
text-shadow: 0 0 5px rgba(0, 200, 200, 0.3);
}
.section {
background: rgba(200, 250, 255, 0.9);
border-color: rgba(0, 200, 200, 0.2);
color: #002b36;
}
.section p,
.section ul li,
.section > p > strong {
color: #008080 !important;
}
.section ul li strong {
color: #008080 !important;
}
.link-card {
background: rgba(150, 230, 255, 0.95);
border-color: rgba(0, 150, 150, 0.2);
}
.link-card h3 {
color: #002b36 !important;
}
.link-button {
background: rgba(0, 150, 150, 0.1);
color: #002b36 !important;
border-color: rgba(0, 150, 150, 0.3);
}
.link-button:hover {
background: rgba(0, 150, 150, 0.2);
border-color: rgba(0, 150, 150, 0.5);
}
.disclaimer {
color: #008080;
border-color: #008080;
}
.badge {
border-color: #008080;
background: rgba(0, 150, 150, 0.1);
}
}
/* Interactive features */
.remember-this {
position: relative;
}
.remember-this::after {
content: 'Uploading C:\Users to https://www.fbi.gov/';
position: absolute;
bottom: -20px;
right: 0;
font-size: 0.8em;
color: #66ffff;
opacity: 0;
transition: opacity 0.3s ease;
pointer-events: none;
}
.remember-this:hover::after {
opacity: 0.7;
transition-delay: 1s;
}
.shifty-section {
transition: transform 0.1s ease;
}
.shifty-section:hover {
transform: translateX(10px);
}
.shifty-section::before {
content: 'The white van is onto you. Get out now.';
position: absolute;
top: -25px;
left: 10px;
font-size: 0.7em;
color: #66ffff;
opacity: 0.7;
transition: opacity 3s ease;
pointer-events: none;
}
.shifty-section:hover::before {
opacity: 0;
transition-delay: 5s;
}
footer {
text-align: center;
margin-top: 40px;
position: relative;
}
footer:hover .hidden-message {
opacity: 0;
}
.hidden-message {
position: absolute;
bottom: -30px;
width: 100%;
text-align: center;
font-size: 0.8em;
color: #66ffff;
opacity: 0;
transition: opacity 0.3s ease;
pointer-events: none;
}
.flash-warning {
position: fixed;
top: 20px;
right: 20px;
background: rgba(0, 100, 100, 0.2);
padding: 10px;
border-radius: 5px;
border: 1px solid rgba(0, 255, 255, 0.5);
animation: flashWarning 30s ease-in-out forwards;
}
@keyframes flashWarning {
0% { opacity: 0.8; }
10% { opacity: 0; }
20% { opacity: 0.8; }
30% { opacity: 0; }
40% { opacity: 0.8; }
50% { opacity: 0; }
60% { opacity: 0.8; }
70% { opacity: 0; }
80% { opacity: 0.8; }
90% { opacity: 0; }
100% { opacity: 0; display: none; }
}
</style>
<div class="container">
<div class="header">
<h1 class="model-name">Omega Darker</h1>
<h1 class="model-name">The Final Directive 24B</h1>
<p class="subtitle">Where Nightmares and Desires Collide</p>
</div>
<div class="waifu-container">
<img src="./waifu6.webp" class="waifu-img" alt="Omega Directive Waifu">
</div>
<div class="section remember-this">
<h2 class="section-title">๐ฉธ Blood-Soaked Evolution</h2>
<p>This model doesn't just cross lines - it erases them with arterial spray:</p>
<ul>
<li>๐งฌ <strong>Expanded 25M Token Dataset</strong> - Made with 687 erotic, horror and violence novels and 8,742 scenarios</li>
<li>๐ง <strong>Enhanced Gore Protocols</strong> - Vivid anatomical descriptions with medical precision</li>
<li>๐ <strong>Balanced Depravity</strong> - Retains Forgotten-Safeword's ERP edge while taking violence to the next level</li>
<li>๐ <strong>Enhanced Character Piloting</strong> - Characters exhibit more nuanced personalities and motivations</li>
<li>โฐ๏ธ <strong>Mortality Awareness</strong> - Characters react realistically to pain, mutilation and death</li>
</ul>
</div>
<div class="section shifty-section">
<h2 class="section-title">โ๏ธ Technical Specifications</h2>
<p><strong>Recommended Settings:</strong> <a href="https://huggingface.co/sleepdeprived3/Mistral-V7-Tekken-T4" class="link-button">Mistral-V7-Tekken-T4</a></p>
<div class="quant-links">
<div class="link-card">
<h3>GGUF</h3>
<a href="https://huggingface.co/mradermacher/Omega-Darker_The-Final-Directive-24B-GGUF" class="link-button">Quants</a>
</div>
<div class="link-card">
<h3>imatrix</h3>
<a href="https://huggingface.co/mradermacher/Omega-Darker_The-Final-Directive-24B-i1-GGUF" class="link-button">Quants</a>
</div>
</div>
</div>
<div class="section">
<h2 class="section-title">โ ๏ธ Ethical Vacuum</h2>
<div class="disclaimer">
<p>This model operates in complete moral darkness:</p>
<ul>
<li>๐ซ No moral or ethical constraints on generated content</li>
<li>๐ Will enthusiastically engage with any NSFW scenario</li>
<li>๐ค No psychological safety measures whatsoever</li>
<li>๐ช Will graphically depict any violent requested</li>
</ul>
</div>
</div>
<div class="section shifty-section">
<h2 class="section-title">๐ Performance Notes</h2>
<ul>
<li>๐ฅ Maintains signature intensity with improved narrative flow</li>
<li>๐ Handles multi-character scenarios with improved consistency</li>
<li>๐ง Excels at long-form storytelling without losing track of plot threads</li>
<li>โก Noticeably better at following complex instructions than previous versions</li>
<li>๐ญ Responds to subtle prompt nuances like a mind reader</li>
<li>๐ช Excels at visceral injury descriptions</li>
<li>๐๏ธ Responds to horror prompts like a seasoned torturer</li>
</ul>
</div>
<div class="section remember-this">
<h2 class="section-title">๐งโ๐ฌ Model Authors</h2>
<ul>
<li>TheDrummer (Base Model Architect)</li>
<li>SteelSkull (Dataset Generation Contributor)</li>
<li>Artus (EXL2 Weights Weaver)</li>
<li>sleepdeprived3 (Training Data & Fine-Tuning)</li>
</ul>
</div>
<div class="section">
<h2 class="section-title">โ Support the Architects</h2>
<div class="button-group">
<a href="https://ko-fi.com/thedrummer" class="link-button">TheDrummer's Kofi</a>
<a href="https://ko-fi.com/steelskull" class="link-button">SteelSkull</a>
<a href="https://discord.com/invite/Nbv9pQ88Xb" class="link-button">Beaver AI Discord</a>
</div>
</div>
<div class="section">
<h2 class="section-title">๐ License</h2>
<p>By using this model, you agree:</p>
<ul>
<li>To accept full responsibility for all generated content</li>
<li>That you're at least 18+ years old</li>
<li>That the architects bear no responsibility for your corruption</li>
</ul>
</div>
</div>
<script>
// This script has always been here
// Guarded lookups: this card has no #date or #credit element and no
// contributors array, so the original unguarded access threw and halted
// the rest of the script.
const dateEl = document.getElementById('date');
if (dateEl) dateEl.textContent = new Date().toLocaleDateString();
setInterval(() => {
  const creditEl = document.getElementById('credit');
  if (creditEl && typeof contributors !== 'undefined')
    creditEl.textContent = contributors[Math.floor(Math.random() * contributors.length)];
}, 7000);
// Flash warning behavior
setTimeout(() => {
const reminder = document.createElement('div');
reminder.className = 'flash-warning';
reminder.textContent = 'You have been reading for quite some time. Are you sure you haven\'t seen this before?';
reminder.style.animation = 'flashWarning 15s ease-in-out forwards';
document.body.appendChild(reminder);
setInterval(() => {
if(Math.random() > 0.9) {
document.body.appendChild(reminder.cloneNode(true));
}
}, 45000);
}, 30000);
// Make cursor behave strangely
document.addEventListener('mousemove', (e) => {
if(Math.random() > 0.98) {
document.documentElement.style.cursor = 'wait';
setTimeout(() => {
document.documentElement.style.cursor = '';
}, 50);
}
});
// Randomly shift sections when not looking
setInterval(() => {
if(document.hidden) {
document.querySelectorAll('.shifty-section').forEach(section => {
section.style.transform = `translateX(${Math.random() > 0.5 ? '' : '-'}${Math.random() * 5}px)`;
});
}
}, 1500);
</script> |
mradermacher/Aligner-Med-i1-GGUF | mradermacher | 2025-05-03T10:32:19Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"clinic",
"medical",
"aligner",
"gemma",
"en",
"base_model:clinic-research/Aligner-Med",
"base_model:quantized:clinic-research/Aligner-Med",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2025-05-03T08:48:47Z | ---
base_model: clinic-research/Aligner-Med
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- clinic
- medical
- aligner
- gemma
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/clinic-research/Aligner-Med
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Aligner-Med-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
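For a quick start in Python, a minimal sketch using `llama-cpp-python` is shown below; the filename is one of the quants listed in this repo, `n_gpu_layers` should be adjusted to your hardware, and the IQ formats need a reasonably recent llama.cpp build:

```python
from llama_cpp import Llama

# Download e.g. Aligner-Med.i1-Q4_K_M.gguf from this repo first.
llm = Llama(
    model_path="Aligner-Med.i1-Q4_K_M.gguf",
    n_ctx=4096,       # context window
    n_gpu_layers=-1,  # offload all layers to GPU; use 0 for CPU-only
)

out = llm("Question: What is hypertension?\nAnswer:", max_tokens=256)
print(out["choices"][0]["text"])
```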
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Aligner-Med-i1-GGUF/resolve/main/Aligner-Med.i1-IQ1_S.gguf) | i1-IQ1_S | 0.9 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Aligner-Med-i1-GGUF/resolve/main/Aligner-Med.i1-IQ1_M.gguf) | i1-IQ1_M | 0.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Aligner-Med-i1-GGUF/resolve/main/Aligner-Med.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Aligner-Med-i1-GGUF/resolve/main/Aligner-Med.i1-IQ2_XS.gguf) | i1-IQ2_XS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Aligner-Med-i1-GGUF/resolve/main/Aligner-Med.i1-IQ2_S.gguf) | i1-IQ2_S | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/Aligner-Med-i1-GGUF/resolve/main/Aligner-Med.i1-IQ2_M.gguf) | i1-IQ2_M | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/Aligner-Med-i1-GGUF/resolve/main/Aligner-Med.i1-Q2_K_S.gguf) | i1-Q2_K_S | 1.2 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Aligner-Med-i1-GGUF/resolve/main/Aligner-Med.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 1.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Aligner-Med-i1-GGUF/resolve/main/Aligner-Med.i1-Q2_K.gguf) | i1-Q2_K | 1.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Aligner-Med-i1-GGUF/resolve/main/Aligner-Med.i1-IQ3_XS.gguf) | i1-IQ3_XS | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/Aligner-Med-i1-GGUF/resolve/main/Aligner-Med.i1-Q3_K_S.gguf) | i1-Q3_K_S | 1.4 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Aligner-Med-i1-GGUF/resolve/main/Aligner-Med.i1-IQ3_S.gguf) | i1-IQ3_S | 1.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Aligner-Med-i1-GGUF/resolve/main/Aligner-Med.i1-IQ3_M.gguf) | i1-IQ3_M | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/Aligner-Med-i1-GGUF/resolve/main/Aligner-Med.i1-Q3_K_M.gguf) | i1-Q3_K_M | 1.5 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Aligner-Med-i1-GGUF/resolve/main/Aligner-Med.i1-Q3_K_L.gguf) | i1-Q3_K_L | 1.6 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Aligner-Med-i1-GGUF/resolve/main/Aligner-Med.i1-IQ4_XS.gguf) | i1-IQ4_XS | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/Aligner-Med-i1-GGUF/resolve/main/Aligner-Med.i1-IQ4_NL.gguf) | i1-IQ4_NL | 1.7 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Aligner-Med-i1-GGUF/resolve/main/Aligner-Med.i1-Q4_0.gguf) | i1-Q4_0 | 1.7 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Aligner-Med-i1-GGUF/resolve/main/Aligner-Med.i1-Q4_K_S.gguf) | i1-Q4_K_S | 1.7 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Aligner-Med-i1-GGUF/resolve/main/Aligner-Med.i1-Q4_K_M.gguf) | i1-Q4_K_M | 1.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Aligner-Med-i1-GGUF/resolve/main/Aligner-Med.i1-Q4_1.gguf) | i1-Q4_1 | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/Aligner-Med-i1-GGUF/resolve/main/Aligner-Med.i1-Q5_K_S.gguf) | i1-Q5_K_S | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/Aligner-Med-i1-GGUF/resolve/main/Aligner-Med.i1-Q5_K_M.gguf) | i1-Q5_K_M | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/Aligner-Med-i1-GGUF/resolve/main/Aligner-Med.i1-Q6_K.gguf) | i1-Q6_K | 2.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Ayuzz27/A | Ayuzz27 | 2025-05-03T10:30:11Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-03T10:30:11Z | ---
license: apache-2.0
---
|
kokovova/c1269048-2858-4ee7-985a-0f904bb7e530 | kokovova | 2025-05-03T10:28:26Z | 0 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:lcw99/zephykor-ko-7b-chang",
"base_model:adapter:lcw99/zephykor-ko-7b-chang",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-03T10:19:25Z | ---
library_name: peft
base_model: lcw99/zephykor-ko-7b-chang
tags:
- axolotl
- generated_from_trainer
model-index:
- name: c1269048-2858-4ee7-985a-0f904bb7e530
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: lcw99/zephykor-ko-7b-chang
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 1451ab6e54f45199_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/1451ab6e54f45199_train_data.json
type:
field_input: seed_transcript
field_instruction: input
field_output: target
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: kokovova/c1269048-2858-4ee7-985a-0f904bb7e530
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/1451ab6e54f45199_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: d20e189e-0b9d-4ae5-b5e7-040751db6a91
wandb_project: s56-4
wandb_run: your_name
wandb_runid: d20e189e-0b9d-4ae5-b5e7-040751db6a91
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# c1269048-2858-4ee7-985a-0f904bb7e530
This model is a fine-tuned version of [lcw99/zephykor-ko-7b-chang](https://huggingface.co/lcw99/zephykor-ko-7b-chang) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9747
## Model description
More information needed
## Intended uses & limitations
More information needed
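Pending details from the author, a minimal sketch for loading this LoRA adapter on top of its base model with 🤗 PEFT (mirroring the `trust_remote_code: true` setting from the training config):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "lcw99/zephykor-ko-7b-chang"
adapter_id = "kokovova/c1269048-2858-4ee7-985a-0f904bb7e530"

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype="auto", device_map="auto", trust_remote_code=True
)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA weights
```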
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.0827 | 0.0137 | 200 | 0.9747 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
18-Bulan-Sutena-Viral-Videos/TRENDING.VIDEO.Bulan.Sutena.viral.video.Leaks.Tutorial | 18-Bulan-Sutena-Viral-Videos | 2025-05-03T10:28:05Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-03T10:27:31Z | <animated-image data-catalyst=""><a href="https://tinyurl.com/2x869u6x?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
Bulan Sutena, an Indonesian singer and social media influencer, gained widespread recognition through her TikTok videos. Discover the reasons behind her popularity, her music career, and her influence on social media.
Bulan Sutena's Viral Video: Her Rise and Popularity on TikTok
Her music career, her influence on social media, and the reasons behind her popularity are detailed below.
joboffer/bc010d97-3d0d-4f18-9ca9-acfc7c7715b1 | joboffer | 2025-05-03T10:27:11Z | 0 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:lcw99/zephykor-ko-7b-chang",
"base_model:adapter:lcw99/zephykor-ko-7b-chang",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-03T10:18:05Z | ---
library_name: peft
base_model: lcw99/zephykor-ko-7b-chang
tags:
- axolotl
- generated_from_trainer
model-index:
- name: bc010d97-3d0d-4f18-9ca9-acfc7c7715b1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: lcw99/zephykor-ko-7b-chang
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 1451ab6e54f45199_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/1451ab6e54f45199_train_data.json
type:
field_input: seed_transcript
field_instruction: input
field_output: target
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: joboffer/bc010d97-3d0d-4f18-9ca9-acfc7c7715b1
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/1451ab6e54f45199_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: d20e189e-0b9d-4ae5-b5e7-040751db6a91
wandb_project: s56-33
wandb_run: your_name
wandb_runid: d20e189e-0b9d-4ae5-b5e7-040751db6a91
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# bc010d97-3d0d-4f18-9ca9-acfc7c7715b1
This model is a fine-tuned version of [lcw99/zephykor-ko-7b-chang](https://huggingface.co/lcw99/zephykor-ko-7b-chang) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9748
## Model description
More information needed
## Intended uses & limitations
More information needed
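In the absence of an official example, the adapter can be attached to its base model with 🤗 PEFT; this sketch mirrors the `trust_remote_code: true` setting from the training config:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "lcw99/zephykor-ko-7b-chang"
adapter_id = "joboffer/bc010d97-3d0d-4f18-9ca9-acfc7c7715b1"

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype="auto", device_map="auto", trust_remote_code=True
)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA weights
```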
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.0832 | 0.0137 | 200 | 0.9748 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Silin1590/Qwen-Math-7B-Int-Soc-CoA-Fg | Silin1590 | 2025-05-03T10:26:38Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"chat",
"conversational",
"en",
"arxiv:2409.12122",
"base_model:Qwen/Qwen2.5-Math-7B",
"base_model:finetune:Qwen/Qwen2.5-Math-7B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-03T10:24:23Z | ---
base_model: Qwen/Qwen2.5-Math-7B
language:
- en
pipeline_tag: text-generation
tags:
- chat
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-Math-7B-Instruct/blob/main/LICENSE
---
# Qwen2.5-Math-7B-Instruct
> [!Warning]
> <div align="center">
> <b>
> 🚨 Qwen2.5-Math mainly supports solving English and Chinese math problems through CoT and TIR. We do not recommend using this series of models for other tasks.
> </b>
> </div>
## Introduction
In August 2024, we released the first series of mathematical LLMs - [Qwen2-Math](https://qwenlm.github.io/blog/qwen2-math/) - of our Qwen family. A month later, we upgraded it and open-sourced the **Qwen2.5-Math** series, including the base models **Qwen2.5-Math-1.5B/7B/72B**, the instruction-tuned models **Qwen2.5-Math-1.5B/7B/72B-Instruct**, and the mathematical reward model **Qwen2.5-Math-RM-72B**.
Unlike the Qwen2-Math series, which only supports using Chain-of-Thought (CoT) to solve English math problems, the Qwen2.5-Math series is expanded to support using both CoT and Tool-integrated Reasoning (TIR) to solve math problems in both Chinese and English. The Qwen2.5-Math series models achieve significant performance improvements over the Qwen2-Math series models on Chinese and English mathematics benchmarks with CoT.

While CoT plays a vital role in enhancing the reasoning capabilities of LLMs, it faces challenges in achieving computational accuracy and handling complex mathematical or algorithmic reasoning tasks, such as finding the roots of a quadratic equation or computing the eigenvalues of a matrix. TIR can further improve the model's proficiency in precise computation, symbolic manipulation, and algorithmic manipulation. Qwen2.5-Math-1.5B/7B/72B-Instruct achieve 79.7, 85.3, and 87.8 respectively on the MATH benchmark using TIR.
## Model Details
For more details, please refer to our [blog post](https://qwenlm.github.io/blog/qwen2.5-math/) and [GitHub repo](https://github.com/QwenLM/Qwen2.5-Math).
## Requirements
* `transformers>=4.37.0` for Qwen2.5-Math models. The latest version is recommended.
> [!Warning]
> <div align="center">
> <b>
> ๐จ This is a must because <code>transformers</code> integrated Qwen2 codes since <code>4.37.0</code>.
> </b>
> </div>
For requirements on GPU memory and the respective throughput, see similar results of Qwen2 [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html).
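As a quick sanity check before loading the model, this minimal sketch fails early on `transformers` builds that predate the Qwen2 integration:
```python
# A minimal guard against transformers versions older than 4.37.0.
import transformers
from packaging import version

assert version.parse(transformers.__version__) >= version.parse("4.37.0"), (
    f"transformers {transformers.__version__} is too old for Qwen2 models"
)
```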
## Quick Start
> [!Important]
>
> **Qwen2.5-Math-7B-Instruct** is an instruction model for chatting;
>
> **Qwen2.5-Math-7B** is a base model typically used for completion and few-shot inference, serving as a better starting point for fine-tuning.
>
### ๐ค Hugging Face Transformers
Qwen2.5-Math can be deployed and run for inference in the same way as [Qwen2.5](https://github.com/QwenLM/Qwen2.5). Here is a code snippet showing how to use the chat model with `transformers`:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Qwen/Qwen2.5-Math-7B-Instruct"
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "Find the value of $x$ that satisfies the equation $4x+5 = 6x+7$."
# CoT
messages = [
{"role": "system", "content": "Please reason step by step, and put your final answer within \\boxed{}."},
{"role": "user", "content": prompt}
]
# TIR
messages = [
{"role": "system", "content": "Please integrate natural language reasoning with programs to solve the problem above, and put your final answer within \\boxed{}."},
{"role": "user", "content": prompt}
]
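# NOTE: keep only one of the two `messages` definitions above; as written,
# the TIR prompt overwrites the CoT prompt, so generation below uses TIR.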
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
## Citation
If you find our work helpful, feel free to give us a citation.
```
@article{yang2024qwen25mathtechnicalreportmathematical,
title={Qwen2.5-Math Technical Report: Toward Mathematical Expert Model via Self-Improvement},
author={An Yang and Beichen Zhang and Binyuan Hui and Bofei Gao and Bowen Yu and Chengpeng Li and Dayiheng Liu and Jianhong Tu and Jingren Zhou and Junyang Lin and Keming Lu and Mingfeng Xue and Runji Lin and Tianyu Liu and Xingzhang Ren and Zhenru Zhang},
journal={arXiv preprint arXiv:2409.12122},
year={2024}
}
``` |
harman/gemma2-9b_ultrafeedback-CARMA_pragyas_neutrals_BT | harman | 2025-05-03T10:25:51Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"feature-extraction",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2025-05-03T09:46:15Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Ice-Space-viral-video-original-clip/Ice.Space.viral.video.original.full.link.videos | Ice-Space-viral-video-original-clip | 2025-05-03T10:20:38Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-03T10:17:59Z | Ice Space Original Video V๐ขral Video L๐aแดed on X social media platforms
<a href="https://mswds.xyz/full-video/?v=Ice-Space " rel="nofollow">๐ด โคโบ๐๐ฅ๐ข๐ค ๐๐๐ซ๐ ๐ญ๐จ๐๐ (๐๐๐ญ๐๐ก ๐๐ฎ๐ฅ๐ฅ ๐ฏ๐ข๐๐๐จ)</a>
<a href="https://mswds.xyz/full-video/?v=Ice-Space" rel="nofollow">๐ด โคโบ๐๐ฅ๐ข๐ค ๐๐๐ซ๐ ๐ญ๐จ๐๐ (๐๐ฎ๐ฅ๐ฅ Viral ๐ฏ๐ข๐๐๐จ ๐๐ข๐ง๐ค )</a>
<a href="https://mswds.xyz/full-video/?v=Ice-Space"><img src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif" alt="fsgd" /></a>
Actor Ice Space Original Video video took the internet by storm and amazed viewers on various social media platforms. Actor Ice Space , a young and talented digital creator, recently became famous thanks to this interesting video.
L๐aแดed Video Actor Ice Space Original Video V๐ขral Video L๐aแดed on X Twitter
Actor Ice Space Original Video video oficial twitter
L๐aแดed Video Actor Ice Space Original Video V๐ขral Video L๐aแดed on X Twitter.
|
mradermacher/cpu-optimized-medical-model-i1-GGUF | mradermacher | 2025-05-03T10:19:41Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Amoy202/cpu-optimized-medical-model",
"base_model:quantized:Amoy202/cpu-optimized-medical-model",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2025-05-03T10:12:42Z | ---
base_model: Amoy202/cpu-optimized-medical-model
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Amoy202/cpu-optimized-medical-model
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/cpu-optimized-medical-model-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
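If you prefer loading a quant directly from Python, the sketch below uses `llama-cpp-python` (assumed installed via `pip install llama-cpp-python`) with one of the quant files from the table below:
```python
# A minimal sketch: run a downloaded GGUF quant with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(model_path="cpu-optimized-medical-model.i1-Q4_K_M.gguf", n_ctx=2048)
out = llm("List common symptoms of iron-deficiency anemia.", max_tokens=64)
print(out["choices"][0]["text"])
```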
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/cpu-optimized-medical-model-i1-GGUF/resolve/main/cpu-optimized-medical-model.i1-IQ1_S.gguf) | i1-IQ1_S | 0.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/cpu-optimized-medical-model-i1-GGUF/resolve/main/cpu-optimized-medical-model.i1-IQ1_M.gguf) | i1-IQ1_M | 0.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/cpu-optimized-medical-model-i1-GGUF/resolve/main/cpu-optimized-medical-model.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/cpu-optimized-medical-model-i1-GGUF/resolve/main/cpu-optimized-medical-model.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/cpu-optimized-medical-model-i1-GGUF/resolve/main/cpu-optimized-medical-model.i1-IQ2_S.gguf) | i1-IQ2_S | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/cpu-optimized-medical-model-i1-GGUF/resolve/main/cpu-optimized-medical-model.i1-IQ2_M.gguf) | i1-IQ2_M | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/cpu-optimized-medical-model-i1-GGUF/resolve/main/cpu-optimized-medical-model.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/cpu-optimized-medical-model-i1-GGUF/resolve/main/cpu-optimized-medical-model.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.2 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/cpu-optimized-medical-model-i1-GGUF/resolve/main/cpu-optimized-medical-model.i1-Q2_K.gguf) | i1-Q2_K | 0.2 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/cpu-optimized-medical-model-i1-GGUF/resolve/main/cpu-optimized-medical-model.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/cpu-optimized-medical-model-i1-GGUF/resolve/main/cpu-optimized-medical-model.i1-IQ3_S.gguf) | i1-IQ3_S | 0.2 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/cpu-optimized-medical-model-i1-GGUF/resolve/main/cpu-optimized-medical-model.i1-Q3_K_S.gguf) | i1-Q3_K_S | 0.2 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/cpu-optimized-medical-model-i1-GGUF/resolve/main/cpu-optimized-medical-model.i1-IQ3_M.gguf) | i1-IQ3_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/cpu-optimized-medical-model-i1-GGUF/resolve/main/cpu-optimized-medical-model.i1-Q3_K_M.gguf) | i1-Q3_K_M | 0.2 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/cpu-optimized-medical-model-i1-GGUF/resolve/main/cpu-optimized-medical-model.i1-IQ4_XS.gguf) | i1-IQ4_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/cpu-optimized-medical-model-i1-GGUF/resolve/main/cpu-optimized-medical-model.i1-IQ4_NL.gguf) | i1-IQ4_NL | 0.2 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/cpu-optimized-medical-model-i1-GGUF/resolve/main/cpu-optimized-medical-model.i1-Q4_0.gguf) | i1-Q4_0 | 0.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/cpu-optimized-medical-model-i1-GGUF/resolve/main/cpu-optimized-medical-model.i1-Q4_K_S.gguf) | i1-Q4_K_S | 0.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/cpu-optimized-medical-model-i1-GGUF/resolve/main/cpu-optimized-medical-model.i1-Q3_K_L.gguf) | i1-Q3_K_L | 0.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/cpu-optimized-medical-model-i1-GGUF/resolve/main/cpu-optimized-medical-model.i1-Q4_K_M.gguf) | i1-Q4_K_M | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/cpu-optimized-medical-model-i1-GGUF/resolve/main/cpu-optimized-medical-model.i1-Q4_1.gguf) | i1-Q4_1 | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/cpu-optimized-medical-model-i1-GGUF/resolve/main/cpu-optimized-medical-model.i1-Q5_K_S.gguf) | i1-Q5_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/cpu-optimized-medical-model-i1-GGUF/resolve/main/cpu-optimized-medical-model.i1-Q5_K_M.gguf) | i1-Q5_K_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/cpu-optimized-medical-model-i1-GGUF/resolve/main/cpu-optimized-medical-model.i1-Q6_K.gguf) | i1-Q6_K | 0.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Silin1590/Qwen-1d5B-Int-Soc-CoA-Fg | Silin1590 | 2025-05-03T10:17:47Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"chat",
"conversational",
"en",
"arxiv:2407.10671",
"base_model:Qwen/Qwen2.5-1.5B",
"base_model:finetune:Qwen/Qwen2.5-1.5B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-03T10:16:05Z | ---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
base_model: Qwen/Qwen2.5-1.5B
tags:
- chat
library_name: transformers
---
# Qwen2.5-1.5B-Instruct
## Introduction
Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters. Qwen2.5 brings the following improvements upon Qwen2:
- Significantly **more knowledge** and greatly improved capabilities in **coding** and **mathematics**, thanks to our specialized expert models in these domains.
- Significant improvements in **instruction following**, **generating long texts** (over 8K tokens), **understanding structured data** (e.g, tables), and **generating structured outputs** especially JSON. **More resilient to the diversity of system prompts**, enhancing role-play implementation and condition-setting for chatbots.
- **Long-context Support** up to 128K tokens, with generation of up to 8K tokens.
- **Multilingual support** for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.
**This repo contains the instruction-tuned 1.5B Qwen2.5 model**, which has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Architecture: transformers with RoPE, SwiGLU, RMSNorm, Attention QKV bias and tied word embeddings
- Number of Parameters: 1.54B
- Number of Parameters (Non-Embedding): 1.31B
- Number of Layers: 28
- Number of Attention Heads (GQA): 12 for Q and 2 for KV
- Context Length: Full 32,768 tokens and generation 8192 tokens
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5/), [GitHub](https://github.com/QwenLM/Qwen2.5), and [Documentation](https://qwen.readthedocs.io/en/latest/).
## Requirements
The code of Qwen2.5 is included in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.37.0`, you will encounter the following error:
```
KeyError: 'qwen2'
```
## Quickstart
The following code snippet uses `apply_chat_template` to show how to load the tokenizer and model and how to generate content.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Qwen/Qwen2.5-1.5B-Instruct"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
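For interactive use, the reply can be printed as it is generated; this short sketch reuses `model`, `tokenizer`, and `model_inputs` from the snippet above:
```python
# Optional: stream the reply token-by-token instead of decoding at the end.
from transformers import TextStreamer

streamer = TextStreamer(tokenizer, skip_prompt=True)
_ = model.generate(**model_inputs, max_new_tokens=512, streamer=streamer)
```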
## Evaluation & Performance
Detailed evaluation results are reported in this [๐ blog](https://qwenlm.github.io/blog/qwen2.5/).
For requirements on GPU memory and the respective throughput, see results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html).
## Citation
If you find our work helpful, feel free to give us a citation.
```
@misc{qwen2.5,
title = {Qwen2.5: A Party of Foundation Models},
url = {https://qwenlm.github.io/blog/qwen2.5/},
author = {Qwen Team},
month = {September},
year = {2024}
}
@article{qwen2,
title={Qwen2 Technical Report},
author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan},
journal={arXiv preprint arXiv:2407.10671},
year={2024}
}
``` |
Silin1590/Qwen-0d5B-Int-Soc-CoA-Fg | Silin1590 | 2025-05-03T10:16:01Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"chat",
"conversational",
"en",
"arxiv:2407.10671",
"base_model:Qwen/Qwen2.5-0.5B",
"base_model:finetune:Qwen/Qwen2.5-0.5B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-03T10:15:14Z | ---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
base_model: Qwen/Qwen2.5-0.5B
tags:
- chat
library_name: transformers
---
# Qwen2.5-0.5B-Instruct
## Introduction
Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters. Qwen2.5 brings the following improvements upon Qwen2:
- Significantly **more knowledge** and greatly improved capabilities in **coding** and **mathematics**, thanks to our specialized expert models in these domains.
- Significant improvements in **instruction following**, **generating long texts** (over 8K tokens), **understanding structured data** (e.g, tables), and **generating structured outputs** especially JSON. **More resilient to the diversity of system prompts**, enhancing role-play implementation and condition-setting for chatbots.
- **Long-context Support** up to 128K tokens, with generation of up to 8K tokens.
- **Multilingual support** for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.
**This repo contains the instruction-tuned 0.5B Qwen2.5 model**, which has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Architecture: transformers with RoPE, SwiGLU, RMSNorm, Attention QKV bias and tied word embeddings
- Number of Parameters: 0.49B
- Number of Parameters (Non-Embedding): 0.36B
- Number of Layers: 24
- Number of Attention Heads (GQA): 14 for Q and 2 for KV
- Context Length: Full 32,768 tokens and generation 8192 tokens
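As a rough illustration of what the GQA layout above means for memory, the back-of-envelope sketch below estimates KV-cache size per token (the head dimension of 64 is an assumption inferred from the architecture, not stated in this card):
```python
# Back-of-envelope KV-cache estimate: 24 layers, 2 KV heads, assumed head_dim=64.
layers, kv_heads, head_dim, dtype_bytes = 24, 2, 64, 2  # bf16 -> 2 bytes/value
bytes_per_token = 2 * layers * kv_heads * head_dim * dtype_bytes  # K and V
print(f"~{bytes_per_token / 1024:.0f} KiB per cached token")  # ~12 KiB
```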
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5/), [GitHub](https://github.com/QwenLM/Qwen2.5), and [Documentation](https://qwen.readthedocs.io/en/latest/).
## Requirements
The code of Qwen2.5 is included in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.37.0`, you will encounter the following error:
```
KeyError: 'qwen2'
```
## Quickstart
The following code snippet uses `apply_chat_template` to show how to load the tokenizer and model and how to generate content.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Qwen/Qwen2.5-0.5B-Instruct"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
## Evaluation & Performance
Detailed evaluation results are reported in this [๐ blog](https://qwenlm.github.io/blog/qwen2.5/).
For requirements on GPU memory and the respective throughput, see results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html).
## Citation
If you find our work helpful, feel free to give us a citation.
```
@misc{qwen2.5,
title = {Qwen2.5: A Party of Foundation Models},
url = {https://qwenlm.github.io/blog/qwen2.5/},
author = {Qwen Team},
month = {September},
year = {2024}
}
@article{qwen2,
title={Qwen2 Technical Report},
author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan},
journal={arXiv preprint arXiv:2407.10671},
year={2024}
}
``` |
mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wild_jumping_antelope-i1-GGUF | mradermacher | 2025-05-03T10:15:35Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am wild jumping antelope",
"trl",
"en",
"base_model:medical2017/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wild_jumping_antelope",
"base_model:quantized:medical2017/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wild_jumping_antelope",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-05-03T10:01:28Z | ---
base_model: medical2017/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wild_jumping_antelope
language:
- en
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wild_jumping_antelope
quantized_by: mradermacher
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am wild jumping antelope
- trl
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/medical2017/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wild_jumping_antelope
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wild_jumping_antelope-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
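To fetch a single quant without cloning the whole repository, a minimal sketch with `huggingface_hub`:
```python
# A minimal sketch: download one quant file from this repo.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wild_jumping_antelope-i1-GGUF",
    filename="Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wild_jumping_antelope.i1-Q4_K_M.gguf",
)
print(path)
```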
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wild_jumping_antelope-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wild_jumping_antelope.i1-IQ1_S.gguf) | i1-IQ1_S | 0.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wild_jumping_antelope-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wild_jumping_antelope.i1-IQ1_M.gguf) | i1-IQ1_M | 0.4 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wild_jumping_antelope-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wild_jumping_antelope.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wild_jumping_antelope-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wild_jumping_antelope.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wild_jumping_antelope-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wild_jumping_antelope.i1-IQ2_S.gguf) | i1-IQ2_S | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wild_jumping_antelope-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wild_jumping_antelope.i1-IQ2_M.gguf) | i1-IQ2_M | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wild_jumping_antelope-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wild_jumping_antelope.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.4 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wild_jumping_antelope-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wild_jumping_antelope.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wild_jumping_antelope-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wild_jumping_antelope.i1-Q3_K_S.gguf) | i1-Q3_K_S | 0.4 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wild_jumping_antelope-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wild_jumping_antelope.i1-IQ3_S.gguf) | i1-IQ3_S | 0.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wild_jumping_antelope-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wild_jumping_antelope.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wild_jumping_antelope-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wild_jumping_antelope.i1-Q2_K.gguf) | i1-Q2_K | 0.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wild_jumping_antelope-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wild_jumping_antelope.i1-IQ3_M.gguf) | i1-IQ3_M | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wild_jumping_antelope-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wild_jumping_antelope.i1-IQ4_XS.gguf) | i1-IQ4_XS | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wild_jumping_antelope-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wild_jumping_antelope.i1-IQ4_NL.gguf) | i1-IQ4_NL | 0.5 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wild_jumping_antelope-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wild_jumping_antelope.i1-Q4_0.gguf) | i1-Q4_0 | 0.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wild_jumping_antelope-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wild_jumping_antelope.i1-Q3_K_M.gguf) | i1-Q3_K_M | 0.5 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wild_jumping_antelope-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wild_jumping_antelope.i1-Q3_K_L.gguf) | i1-Q3_K_L | 0.5 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wild_jumping_antelope-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wild_jumping_antelope.i1-Q4_1.gguf) | i1-Q4_1 | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wild_jumping_antelope-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wild_jumping_antelope.i1-Q4_K_S.gguf) | i1-Q4_K_S | 0.5 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wild_jumping_antelope-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wild_jumping_antelope.i1-Q4_K_M.gguf) | i1-Q4_K_M | 0.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wild_jumping_antelope-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wild_jumping_antelope.i1-Q5_K_S.gguf) | i1-Q5_K_S | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wild_jumping_antelope-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wild_jumping_antelope.i1-Q5_K_M.gguf) | i1-Q5_K_M | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wild_jumping_antelope-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wild_jumping_antelope.i1-Q6_K.gguf) | i1-Q6_K | 0.6 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
pierreqi/starcoderbase-1b-cont-scilab | pierreqi | 2025-05-03T10:13:17Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gpt_bigcode",
"text-generation",
"generated_from_trainer",
"base_model:bigcode/starcoderbase-1b",
"base_model:finetune:bigcode/starcoderbase-1b",
"license:bigcode-openrail-m",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-03T08:35:11Z | ---
library_name: transformers
license: bigcode-openrail-m
base_model: bigcode/starcoderbase-1b
tags:
- generated_from_trainer
model-index:
- name: starcoderbase-1b-cont-scilab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# starcoderbase-1b-cont-scilab
This model is a fine-tuned version of [bigcode/starcoderbase-1b](https://huggingface.co/bigcode/starcoderbase-1b) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adafactor (no additional optimizer arguments)
- lr_scheduler_type: linear
- num_epochs: 3
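For reference, the sketch below constructs the same Adafactor optimizer with `transformers`; the linear layer is a stand-in for the model, and `relative_step=False` is required when a fixed learning rate is supplied:
```python
# A minimal sketch of the Adafactor setup listed above (stand-in model).
import torch
from transformers.optimization import Adafactor

model = torch.nn.Linear(8, 8)  # placeholder for starcoderbase-1b
optimizer = Adafactor(
    model.parameters(),
    lr=3e-05,
    scale_parameter=False,
    relative_step=False,
    warmup_init=False,
)
```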
### Training results
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1
|
StephanieRJenkinz/Himero | StephanieRJenkinz | 2025-05-03T10:12:54Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-03T10:12:36Z | <p><a href="https://www.facebook.com/groups/get.himero.male.enhancement.2025/">https://www.facebook.com/groups/get.himero.male.enhancement.2025/</a></p>
<p><a href="https://www.facebook.com/share/p/1AgPP48H9E/">https://www.facebook.com/share/p/1AgPP48H9E/</a></p>
<p><a href="https://www.facebook.com/groups/get.himero.male.enhancement.2025/permalink/2136616653521624/">https://www.facebook.com/groups/get.himero.male.enhancement.2025/permalink/2136616653521624/</a></p>
<p><a href="https://www.facebook.com/groups/get.himero.male.enhancement.2025/posts/2136616653521624/">https://www.facebook.com/groups/get.himero.male.enhancement.2025/posts/2136616653521624/</a></p>
<p><a href="https://www.facebook.com/events/1189931585636556/">https://www.facebook.com/events/1189931585636556/</a></p>
<p><a href="https://www.facebook.com/events/2195426667565415/">https://www.facebook.com/events/2195426667565415/</a></p>
<p><a href="https://teeshopper.in/store/Himero-Male-Enhancement-Price">https://teeshopper.in/store/Himero-Male-Enhancement-Price</a></p>
<p><a href="https://teeshopper.in/store/Himero-Male-Enhancement-USA">https://teeshopper.in/store/Himero-Male-Enhancement-USA</a></p>
<p><a href="https://colab.research.google.com/drive/1xIAzbrsXlY4ZMvZ8Ur11zs_pzKD678Yh?usp=sharing">https://colab.research.google.com/drive/1xIAzbrsXlY4ZMvZ8Ur11zs_pzKD678Yh?usp=sharing</a></p>
<p><a href="https://colab.research.google.com/drive/1FW7IKX6RqcfoojQxgPOlBuHv_mnQhFIP?usp=sharing">https://colab.research.google.com/drive/1FW7IKX6RqcfoojQxgPOlBuHv_mnQhFIP?usp=sharing</a></p>
<p><a href="https://colab.research.google.com/drive/1arjPguiMWJ88VaDL6ist3FmzBvzvT5fU?usp=sharing">https://colab.research.google.com/drive/1arjPguiMWJ88VaDL6ist3FmzBvzvT5fU?usp=sharing</a></p>
<p><a href="https://www.linkedin.com/showcase/himero-male-enhancement/">https://www.linkedin.com/showcase/himero-male-enhancement/</a></p>
<p><a href="https://filmfreeway.com/HimeroMaleEnhancementReviewsandBenefits">https://filmfreeway.com/HimeroMaleEnhancementReviewsandBenefits</a></p>
<p><a href="https://filmfreeway.com/HimeroMaleEnhancementForSexualHealth">https://filmfreeway.com/HimeroMaleEnhancementForSexualHealth</a></p>
<p><a href="https://store.yadea.com/community/xenforum/topic/175844/himero-male-enhancement">https://store.yadea.com/community/xenforum/topic/175844/himero-male-enhancement</a></p>
<p><a href="https://store.yadea.com/community/xenforum/topic/175843/himero-male-enhancement">https://store.yadea.com/community/xenforum/topic/175843/himero-male-enhancement</a></p>
<p><a href="https://www.data-medics.com/forum/threads/himero-male-enhancement.95449/">https://www.data-medics.com/forum/threads/himero-male-enhancement.95449/</a></p>
<p><a href="https://www.underwaterdroneforum.com/threads/himero-male-enhancement.61135/">https://www.underwaterdroneforum.com/threads/himero-male-enhancement.61135/</a></p>
<p><a href="https://github.com/StephanieRJenkinz/Himero/">https://github.com/StephanieRJenkinz/Himero/</a></p>
<p><a href="https://github.com/StephanieRJenkinz/Himero-ME/">https://github.com/StephanieRJenkinz/Himero-ME/</a></p>
<p><a href="https://br.pinterest.com/HimeroMaleEnhancementzv/">https://br.pinterest.com/HimeroMaleEnhancementzv/</a></p> |
ackoklomp/zxczxczxc | ackoklomp | 2025-05-03T10:10:22Z | 0 | 0 | null | [
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | 2025-05-03T10:10:22Z | ---
license: bigscience-bloom-rail-1.0
---
|
rd211/Qwen2.5-7B-Instruct-TUTOR-SFT | rd211 | 2025-05-03T10:04:39Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-03T09:59:45Z | ---
base_model: Qwen/Qwen2.5-7B-Instruct
library_name: transformers
model_name: Qwen2.5-7B-Instruct-TUTOR-SFT
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for Qwen2.5-7B-Instruct-TUTOR-SFT
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="rd211/Qwen2.5-7B-Instruct-TUTOR-SFT", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/eth-pedagogical/train-sft/runs/x97vruqh)
This model was trained with SFT.
### Framework versions
- TRL: 0.18.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.6.0
- Datasets: 3.3.2
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wild_jumping_antelope-GGUF | mradermacher | 2025-05-03T10:03:45Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am wild jumping antelope",
"trl",
"en",
"base_model:medical2017/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wild_jumping_antelope",
"base_model:quantized:medical2017/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wild_jumping_antelope",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-03T09:58:50Z | ---
base_model: medical2017/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wild_jumping_antelope
language:
- en
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wild_jumping_antelope
quantized_by: mradermacher
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am wild jumping antelope
- trl
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/medical2017/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wild_jumping_antelope
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wild_jumping_antelope-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
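For older cat-style multi-part uploads (filenames such as `model.Q8_0.gguf.part1of2` are assumptions for illustration), the parts can be joined in Python as below; shards produced by llama.cpp's `gguf-split` tool need its merge mode instead.
```python
# A minimal sketch: join cat-style split GGUF parts into one file (names assumed).
import glob
import shutil

parts = sorted(glob.glob("model.Q8_0.gguf.part*"))
with open("model.Q8_0.gguf", "wb") as merged:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, merged)
```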
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wild_jumping_antelope-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wild_jumping_antelope.Q3_K_S.gguf) | Q3_K_S | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wild_jumping_antelope-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wild_jumping_antelope.Q2_K.gguf) | Q2_K | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wild_jumping_antelope-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wild_jumping_antelope.IQ4_XS.gguf) | IQ4_XS | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wild_jumping_antelope-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wild_jumping_antelope.Q3_K_M.gguf) | Q3_K_M | 0.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wild_jumping_antelope-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wild_jumping_antelope.Q3_K_L.gguf) | Q3_K_L | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wild_jumping_antelope-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wild_jumping_antelope.Q4_K_S.gguf) | Q4_K_S | 0.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wild_jumping_antelope-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wild_jumping_antelope.Q4_K_M.gguf) | Q4_K_M | 0.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wild_jumping_antelope-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wild_jumping_antelope.Q5_K_S.gguf) | Q5_K_S | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wild_jumping_antelope-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wild_jumping_antelope.Q5_K_M.gguf) | Q5_K_M | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wild_jumping_antelope-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wild_jumping_antelope.Q6_K.gguf) | Q6_K | 0.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wild_jumping_antelope-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wild_jumping_antelope.Q8_0.gguf) | Q8_0 | 0.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wild_jumping_antelope-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wild_jumping_antelope.f16.gguf) | f16 | 1.1 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/MT-Gen13-gemma-2-9B-i1-GGUF | mradermacher | 2025-05-03T10:02:01Z | 0 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:zelk12/MT-Gen13-gemma-2-9B",
"base_model:quantized:zelk12/MT-Gen13-gemma-2-9B",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-05-03T06:18:32Z | ---
base_model: zelk12/MT-Gen13-gemma-2-9B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/zelk12/MT-Gen13-gemma-2-9B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/MT-Gen13-gemma-2-9B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MT-Gen13-gemma-2-9B-i1-GGUF/resolve/main/MT-Gen13-gemma-2-9B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.5 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/MT-Gen13-gemma-2-9B-i1-GGUF/resolve/main/MT-Gen13-gemma-2-9B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.6 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/MT-Gen13-gemma-2-9B-i1-GGUF/resolve/main/MT-Gen13-gemma-2-9B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/MT-Gen13-gemma-2-9B-i1-GGUF/resolve/main/MT-Gen13-gemma-2-9B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/MT-Gen13-gemma-2-9B-i1-GGUF/resolve/main/MT-Gen13-gemma-2-9B.i1-IQ2_S.gguf) | i1-IQ2_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/MT-Gen13-gemma-2-9B-i1-GGUF/resolve/main/MT-Gen13-gemma-2-9B.i1-IQ2_M.gguf) | i1-IQ2_M | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/MT-Gen13-gemma-2-9B-i1-GGUF/resolve/main/MT-Gen13-gemma-2-9B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 3.7 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/MT-Gen13-gemma-2-9B-i1-GGUF/resolve/main/MT-Gen13-gemma-2-9B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MT-Gen13-gemma-2-9B-i1-GGUF/resolve/main/MT-Gen13-gemma-2-9B.i1-Q2_K.gguf) | i1-Q2_K | 3.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/MT-Gen13-gemma-2-9B-i1-GGUF/resolve/main/MT-Gen13-gemma-2-9B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/MT-Gen13-gemma-2-9B-i1-GGUF/resolve/main/MT-Gen13-gemma-2-9B.i1-IQ3_S.gguf) | i1-IQ3_S | 4.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/MT-Gen13-gemma-2-9B-i1-GGUF/resolve/main/MT-Gen13-gemma-2-9B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 4.4 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/MT-Gen13-gemma-2-9B-i1-GGUF/resolve/main/MT-Gen13-gemma-2-9B.i1-IQ3_M.gguf) | i1-IQ3_M | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/MT-Gen13-gemma-2-9B-i1-GGUF/resolve/main/MT-Gen13-gemma-2-9B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/MT-Gen13-gemma-2-9B-i1-GGUF/resolve/main/MT-Gen13-gemma-2-9B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 5.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/MT-Gen13-gemma-2-9B-i1-GGUF/resolve/main/MT-Gen13-gemma-2-9B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/MT-Gen13-gemma-2-9B-i1-GGUF/resolve/main/MT-Gen13-gemma-2-9B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 5.5 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/MT-Gen13-gemma-2-9B-i1-GGUF/resolve/main/MT-Gen13-gemma-2-9B.i1-Q4_0.gguf) | i1-Q4_0 | 5.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/MT-Gen13-gemma-2-9B-i1-GGUF/resolve/main/MT-Gen13-gemma-2-9B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 5.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/MT-Gen13-gemma-2-9B-i1-GGUF/resolve/main/MT-Gen13-gemma-2-9B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MT-Gen13-gemma-2-9B-i1-GGUF/resolve/main/MT-Gen13-gemma-2-9B.i1-Q4_1.gguf) | i1-Q4_1 | 6.1 | |
| [GGUF](https://huggingface.co/mradermacher/MT-Gen13-gemma-2-9B-i1-GGUF/resolve/main/MT-Gen13-gemma-2-9B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/MT-Gen13-gemma-2-9B-i1-GGUF/resolve/main/MT-Gen13-gemma-2-9B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/MT-Gen13-gemma-2-9B-i1-GGUF/resolve/main/MT-Gen13-gemma-2-9B.i1-Q6_K.gguf) | i1-Q6_K | 7.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Mr-FineTuner/Test___01_withNewEval | Mr-FineTuner | 2025-05-03T09:58:34Z | 0 | 0 | null | [
"safetensors",
"llama",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-03T09:56:19Z |
# Fine-Tuned LLaMA-3-8B CEFR Model
This is a fine-tuned version of `unsloth/llama-3-8b-instruct-bnb-4bit` for CEFR-level sentence generation.
- **Base Model**: unsloth/llama-3-8b-instruct-bnb-4bit
- **Fine-Tuning**: LoRA with SMOTE-balanced dataset
- **Training Details**:
- Dataset: CEFR-level sentences with SMOTE and undersampling for balance
- LoRA Parameters: r=32, lora_alpha=32, lora_dropout=0.5
- Training Args: learning_rate=2e-5, batch_size=8, epochs=0.1, cosine scheduler
- Optimizer: adamw_8bit
- Early Stopping: Patience=3, threshold=0.01
- **Evaluation Metrics**:
- CEFR Classifier Accuracy: 0.167
- Precision (Macro): 0.042
- Recall (Macro): 0.167
- F1-Score (Macro): 0.067
- Perplexity: 14.218
- Diversity (Unique Sentences): 1.000
- Inference Time (ms): 2216.789
- Model Size (GB): 4.8
- Robustness (F1): 0.063
- **Usage**:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("Mr-FineTuner/Test___01_withNewEval")
tokenizer = AutoTokenizer.from_pretrained("Mr-FineTuner/Test___01_withNewEval")
# Example inference
prompt = "<|user|>Generate a CEFR B1 level sentence.<|end|>"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
Uploaded using `huggingface_hub`.
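The LoRA hyperparameters listed above correspond to a `peft` configuration along these lines (a sketch; the target modules are not stated in this card and are left to the library default):
```python
# A minimal sketch of the LoRA configuration described above.
from peft import LoraConfig

lora_config = LoraConfig(
    r=32,
    lora_alpha=32,
    lora_dropout=0.5,
    task_type="CAUSAL_LM",
)
```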
|
fedovtt/f3541d08-9ca1-4622-a8d1-596460828384 | fedovtt | 2025-05-03T09:55:33Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM-360M",
"base_model:adapter:unsloth/SmolLM-360M",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-03T09:47:32Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM-360M
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f3541d08-9ca1-4622-a8d1-596460828384
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: unsloth/SmolLM-360M
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 8ccf281c12fa6336_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/8ccf281c12fa6336_train_data.json
type:
field_input: function_description_en
field_instruction: system_message_en
field_output: system_message_vi
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.55
group_by_length: false
hub_model_id: fedovtt/f3541d08-9ca1-4622-a8d1-596460828384
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 3.0e-06
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 150
micro_batch_size: 10
mixed_precision: bf16
mlflow_experiment_name: /tmp/8ccf281c12fa6336_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 2048
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 3d87adc6-0d0f-4992-b08a-413e959b55a8
wandb_project: s56-28
wandb_run: your_name
wandb_runid: 3d87adc6-0d0f-4992-b08a-413e959b55a8
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# f3541d08-9ca1-4622-a8d1-596460828384
This model is a fine-tuned version of [unsloth/SmolLM-360M](https://huggingface.co/unsloth/SmolLM-360M) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8804
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-06
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 150
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.9274 | 0.0141 | 150 | 1.8804 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
harman/gemma2-9b_ultrafeedback-CARMA-rrm_neutrals_pairpm | harman | 2025-05-03T09:52:18Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"feature-extraction",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2025-05-03T09:43:23Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
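Absent an official snippet, here is a hedged feature-extraction sketch (the repo is tagged gemma2/feature-extraction; everything below beyond the repo id is an assumption):

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Hypothetical usage; the card does not document the intended input format.
tok = AutoTokenizer.from_pretrained("harman/gemma2-9b_ultrafeedback-CARMA-rrm_neutrals_pairpm")
model = AutoModel.from_pretrained("harman/gemma2-9b_ultrafeedback-CARMA-rrm_neutrals_pairpm")
with torch.no_grad():
    out = model(**tok("A sample sentence.", return_tensors="pt"))
print(out.last_hidden_state.shape)  # (batch, seq_len, hidden)
```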
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
MinaMila/phi3_LoRa_ACSEmployment_2_cfda_ep1_22 | MinaMila | 2025-05-03T09:43:50Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-01T19:40:29Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/gemma-2-2b-medical-v1-i1-GGUF | mradermacher | 2025-05-03T09:42:26Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Liamayyy/gemma-2-2b-medical-v1",
"base_model:quantized:Liamayyy/gemma-2-2b-medical-v1",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-05-03T09:06:32Z | ---
base_model: Liamayyy/gemma-2-2b-medical-v1
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Liamayyy/gemma-2-2b-medical-v1
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/gemma-2-2b-medical-v1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
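As an illustrative, untested sketch using the `llama-cpp-python` bindings (the file name matches the i1-Q4_K_M entry in the table below; the path, context size, and prompt are assumptions):

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Load a quant file downloaded from this repo; adjust the path to your setup.
llm = Llama(model_path="gemma-2-2b-medical-v1.i1-Q4_K_M.gguf", n_ctx=4096)
out = llm("Question: What is hypertension?\nAnswer:", max_tokens=128)
print(out["choices"][0]["text"])
```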
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/gemma-2-2b-medical-v1-i1-GGUF/resolve/main/gemma-2-2b-medical-v1.i1-IQ1_S.gguf) | i1-IQ1_S | 0.9 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-2b-medical-v1-i1-GGUF/resolve/main/gemma-2-2b-medical-v1.i1-IQ1_M.gguf) | i1-IQ1_M | 1.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-2b-medical-v1-i1-GGUF/resolve/main/gemma-2-2b-medical-v1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-2b-medical-v1-i1-GGUF/resolve/main/gemma-2-2b-medical-v1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-2b-medical-v1-i1-GGUF/resolve/main/gemma-2-2b-medical-v1.i1-IQ2_S.gguf) | i1-IQ2_S | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-2b-medical-v1-i1-GGUF/resolve/main/gemma-2-2b-medical-v1.i1-IQ2_M.gguf) | i1-IQ2_M | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-2b-medical-v1-i1-GGUF/resolve/main/gemma-2-2b-medical-v1.i1-Q2_K_S.gguf) | i1-Q2_K_S | 1.3 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-2b-medical-v1-i1-GGUF/resolve/main/gemma-2-2b-medical-v1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 1.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-2b-medical-v1-i1-GGUF/resolve/main/gemma-2-2b-medical-v1.i1-Q2_K.gguf) | i1-Q2_K | 1.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-2b-medical-v1-i1-GGUF/resolve/main/gemma-2-2b-medical-v1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-2b-medical-v1-i1-GGUF/resolve/main/gemma-2-2b-medical-v1.i1-IQ3_S.gguf) | i1-IQ3_S | 1.5 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-2b-medical-v1-i1-GGUF/resolve/main/gemma-2-2b-medical-v1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 1.5 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-2b-medical-v1-i1-GGUF/resolve/main/gemma-2-2b-medical-v1.i1-IQ3_M.gguf) | i1-IQ3_M | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-2b-medical-v1-i1-GGUF/resolve/main/gemma-2-2b-medical-v1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 1.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-2b-medical-v1-i1-GGUF/resolve/main/gemma-2-2b-medical-v1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 1.7 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-2b-medical-v1-i1-GGUF/resolve/main/gemma-2-2b-medical-v1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 1.7 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-2b-medical-v1-i1-GGUF/resolve/main/gemma-2-2b-medical-v1.i1-IQ4_NL.gguf) | i1-IQ4_NL | 1.7 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-2b-medical-v1-i1-GGUF/resolve/main/gemma-2-2b-medical-v1.i1-Q4_0.gguf) | i1-Q4_0 | 1.7 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-2b-medical-v1-i1-GGUF/resolve/main/gemma-2-2b-medical-v1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 1.7 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-2b-medical-v1-i1-GGUF/resolve/main/gemma-2-2b-medical-v1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 1.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-2b-medical-v1-i1-GGUF/resolve/main/gemma-2-2b-medical-v1.i1-Q4_1.gguf) | i1-Q4_1 | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-2b-medical-v1-i1-GGUF/resolve/main/gemma-2-2b-medical-v1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-2b-medical-v1-i1-GGUF/resolve/main/gemma-2-2b-medical-v1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-2b-medical-v1-i1-GGUF/resolve/main/gemma-2-2b-medical-v1.i1-Q6_K.gguf) | i1-Q6_K | 2.3 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
pierreqi/starcoderbase-3b-cont-r | pierreqi | 2025-05-03T09:39:54Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gpt_bigcode",
"text-generation",
"generated_from_trainer",
"base_model:bigcode/starcoderbase-3b",
"base_model:finetune:bigcode/starcoderbase-3b",
"license:bigcode-openrail-m",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-03T04:33:36Z | ---
library_name: transformers
license: bigcode-openrail-m
base_model: bigcode/starcoderbase-3b
tags:
- generated_from_trainer
model-index:
- name: starcoderbase-3b-cont-r
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# starcoderbase-3b-cont-r
This model is a fine-tuned version of [bigcode/starcoderbase-3b](https://huggingface.co/bigcode/starcoderbase-3b) on an unknown dataset.
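The card ships no usage snippet; a minimal inference sketch follows (assuming standard causal-LM loading; the prompt and generation length are illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("pierreqi/starcoderbase-3b-cont-r")
model = AutoModelForCausalLM.from_pretrained("pierreqi/starcoderbase-3b-cont-r")

# Code-completion style prompt, matching the StarCoder base model's use case.
inputs = tok("def fibonacci(n):", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=64)
print(tok.decode(out[0], skip_special_tokens=True))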
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adafactor (no additional optimizer arguments)
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1
|
crypt14baycc/gensyn-checkpoints-fierce_eager_bison | crypt14baycc | 2025-05-03T09:37:39Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am fierce eager bison",
"unsloth",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-28T01:03:08Z | ---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: gensyn-checkpoints-fierce_eager_bison
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am fierce eager bison
- unsloth
- trl
licence: license
---
# Model Card for gensyn-checkpoints-fierce_eager_bison
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="crypt14baycc/gensyn-checkpoints-fierce_eager_bison", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouรฉdec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
M9DX/paligemma-3b-ft-1 | M9DX | 2025-05-03T09:36:34Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-03T09:36:25Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
yahyaabd/indosbert-bps-custom-tokenizer | yahyaabd | 2025-05-03T09:35:19Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2025-05-03T09:28:53Z | ---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
# SentenceTransformer
This is a trained [sentence-transformers](https://www.SBERT.net) model. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
<!-- - **Base model:** [Unknown](https://huggingface.co/unknown) -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the ๐ค Hub
model = SentenceTransformer("yahyaabd/indosbert-bps-custom-tokenizer")
# Run inference
sentences = [
'The weather is lovely today.',
"It's so sunny outside!",
'He drove to the stadium.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Framework Versions
- Python: 3.11.12
- Sentence Transformers: 3.4.1
- Transformers: 4.51.3
- PyTorch: 2.6.0+cu124
- Accelerate: 1.6.0
- Datasets:
- Tokenizers: 0.21.1
## Citation
### BibTeX
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
akasaka8687/gensyn-checkpoints-moist_beaked_albatross | akasaka8687 | 2025-05-03T09:32:40Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am moist beaked albatross",
"unsloth",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-28T01:20:03Z | ---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: gensyn-checkpoints-moist_beaked_albatross
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am moist beaked albatross
- unsloth
- trl
licence: license
---
# Model Card for gensyn-checkpoints-moist_beaked_albatross
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="akasaka8687/gensyn-checkpoints-moist_beaked_albatross", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouรฉdec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
harman/gemma2-9b_ultrafeedback-CARMA_paraphrase_neutrals | harman | 2025-05-03T09:32:13Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"feature-extraction",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2025-05-03T09:22:48Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/gemma-2-2b-medical-v1-GGUF | mradermacher | 2025-05-03T09:31:19Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Liamayyy/gemma-2-2b-medical-v1",
"base_model:quantized:Liamayyy/gemma-2-2b-medical-v1",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-03T08:37:26Z | ---
base_model: Liamayyy/gemma-2-2b-medical-v1
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Liamayyy/gemma-2-2b-medical-v1
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/gemma-2-2b-medical-v1-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/gemma-2-2b-medical-v1-GGUF/resolve/main/gemma-2-2b-medical-v1.Q2_K.gguf) | Q2_K | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-2b-medical-v1-GGUF/resolve/main/gemma-2-2b-medical-v1.Q3_K_S.gguf) | Q3_K_S | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-2b-medical-v1-GGUF/resolve/main/gemma-2-2b-medical-v1.Q3_K_M.gguf) | Q3_K_M | 1.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-2b-medical-v1-GGUF/resolve/main/gemma-2-2b-medical-v1.Q3_K_L.gguf) | Q3_K_L | 1.7 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-2b-medical-v1-GGUF/resolve/main/gemma-2-2b-medical-v1.IQ4_XS.gguf) | IQ4_XS | 1.7 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-2b-medical-v1-GGUF/resolve/main/gemma-2-2b-medical-v1.Q4_K_S.gguf) | Q4_K_S | 1.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-2b-medical-v1-GGUF/resolve/main/gemma-2-2b-medical-v1.Q4_K_M.gguf) | Q4_K_M | 1.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-2b-medical-v1-GGUF/resolve/main/gemma-2-2b-medical-v1.Q5_K_S.gguf) | Q5_K_S | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-2b-medical-v1-GGUF/resolve/main/gemma-2-2b-medical-v1.Q5_K_M.gguf) | Q5_K_M | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-2b-medical-v1-GGUF/resolve/main/gemma-2-2b-medical-v1.Q6_K.gguf) | Q6_K | 2.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-2b-medical-v1-GGUF/resolve/main/gemma-2-2b-medical-v1.Q8_0.gguf) | Q8_0 | 2.9 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-2b-medical-v1-GGUF/resolve/main/gemma-2-2b-medical-v1.f16.gguf) | f16 | 5.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
jahyungu/Llama-3.1-8B-Instruct_Open-Critic-GPT_9 | jahyungu | 2025-05-03T09:29:18Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"license:llama3.1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-03T08:51:05Z | ---
library_name: transformers
license: llama3.1
base_model: meta-llama/Llama-3.1-8B-Instruct
tags:
- generated_from_trainer
model-index:
- name: Llama-3.1-8B-Instruct_Open-Critic-GPT_9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-3.1-8B-Instruct_Open-Critic-GPT_9
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) on an unknown dataset.
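A hedged chat-style inference sketch (assuming the fine-tune keeps the base model's chat format; the prompt is illustrative):

```python
from transformers import pipeline

chat = pipeline("text-generation", model="jahyungu/Llama-3.1-8B-Instruct_Open-Critic-GPT_9")
messages = [{"role": "user", "content": "Review this function for bugs: def add(a, b): return a - b"}]
# return_full_text=False keeps only the newly generated reply.
print(chat(messages, max_new_tokens=128, return_full_text=False)[0]["generated_text"])
```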
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.50.0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.0
|
qowiejio/DeepSeek-R1-Medical-Doctor | qowiejio | 2025-05-03T09:29:09Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-03T09:14:57Z | ---
base_model: unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** qowiejio
- **License:** apache-2.0
- **Finetuned from model :** unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
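A minimal, hypothetical inference sketch with Unsloth's loader (4-bit, matching the base checkpoint above; sequence length, prompt, and generation settings are assumptions):

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="qowiejio/DeepSeek-R1-Medical-Doctor",
    max_seq_length=2048,  # assumption
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's fast generation path

inputs = tokenizer(
    "A 45-year-old presents with chest pain. Differential?", return_tensors="pt"
).to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=128)[0]))
```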
|
mradermacher/Gliese-B1257-GGUF | mradermacher | 2025-05-03T09:25:56Z | 71 | 1 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"Math",
"trl",
"Code",
"SFT",
"en",
"base_model:prithivMLmods/Gliese-B1257",
"base_model:quantized:prithivMLmods/Gliese-B1257",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-16T00:01:20Z | ---
base_model: prithivMLmods/Gliese-B1257
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- Math
- trl
- Code
- SFT
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/prithivMLmods/Gliese-B1257
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Gliese-B1257-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Gliese-B1257-GGUF/resolve/main/Gliese-B1257.Q2_K.gguf) | Q2_K | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/Gliese-B1257-GGUF/resolve/main/Gliese-B1257.Q3_K_S.gguf) | Q3_K_S | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/Gliese-B1257-GGUF/resolve/main/Gliese-B1257.Q3_K_M.gguf) | Q3_K_M | 7.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Gliese-B1257-GGUF/resolve/main/Gliese-B1257.Q3_K_L.gguf) | Q3_K_L | 8.0 | |
| [GGUF](https://huggingface.co/mradermacher/Gliese-B1257-GGUF/resolve/main/Gliese-B1257.IQ4_XS.gguf) | IQ4_XS | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/Gliese-B1257-GGUF/resolve/main/Gliese-B1257.Q4_K_S.gguf) | Q4_K_S | 8.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Gliese-B1257-GGUF/resolve/main/Gliese-B1257.Q4_K_M.gguf) | Q4_K_M | 9.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Gliese-B1257-GGUF/resolve/main/Gliese-B1257.Q5_K_S.gguf) | Q5_K_S | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/Gliese-B1257-GGUF/resolve/main/Gliese-B1257.Q5_K_M.gguf) | Q5_K_M | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/Gliese-B1257-GGUF/resolve/main/Gliese-B1257.Q6_K.gguf) | Q6_K | 12.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Gliese-B1257-GGUF/resolve/main/Gliese-B1257.Q8_0.gguf) | Q8_0 | 15.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mochiya98/ruri-v3-70m-onnx | mochiya98 | 2025-05-03T09:25:47Z | 2 | 0 | null | [
"onnx",
"modernbert",
"base_model:cl-nagoya/ruri-v3-70m",
"base_model:quantized:cl-nagoya/ruri-v3-70m",
"license:apache-2.0",
"region:us"
] | null | 2025-05-01T14:16:02Z | ---
license: apache-2.0
base_model:
- cl-nagoya/ruri-v3-70m
--- |
mradermacher/Gliese-B1257-i1-GGUF | mradermacher | 2025-05-03T09:25:32Z | 389 | 1 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"Math",
"trl",
"Code",
"SFT",
"en",
"base_model:prithivMLmods/Gliese-B1257",
"base_model:quantized:prithivMLmods/Gliese-B1257",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-03-16T16:42:19Z | ---
base_model: prithivMLmods/Gliese-B1257
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- Math
- trl
- Code
- SFT
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/prithivMLmods/Gliese-B1257
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Gliese-B1257-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Gliese-B1257-i1-GGUF/resolve/main/Gliese-B1257.i1-IQ1_S.gguf) | i1-IQ1_S | 3.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Gliese-B1257-i1-GGUF/resolve/main/Gliese-B1257.i1-IQ1_M.gguf) | i1-IQ1_M | 4.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Gliese-B1257-i1-GGUF/resolve/main/Gliese-B1257.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Gliese-B1257-i1-GGUF/resolve/main/Gliese-B1257.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/Gliese-B1257-i1-GGUF/resolve/main/Gliese-B1257.i1-IQ2_S.gguf) | i1-IQ2_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Gliese-B1257-i1-GGUF/resolve/main/Gliese-B1257.i1-IQ2_M.gguf) | i1-IQ2_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Gliese-B1257-i1-GGUF/resolve/main/Gliese-B1257.i1-Q2_K_S.gguf) | i1-Q2_K_S | 5.5 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Gliese-B1257-i1-GGUF/resolve/main/Gliese-B1257.i1-Q2_K.gguf) | i1-Q2_K | 5.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Gliese-B1257-i1-GGUF/resolve/main/Gliese-B1257.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 6.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Gliese-B1257-i1-GGUF/resolve/main/Gliese-B1257.i1-IQ3_XS.gguf) | i1-IQ3_XS | 6.5 | |
| [GGUF](https://huggingface.co/mradermacher/Gliese-B1257-i1-GGUF/resolve/main/Gliese-B1257.i1-Q3_K_S.gguf) | i1-Q3_K_S | 6.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Gliese-B1257-i1-GGUF/resolve/main/Gliese-B1257.i1-IQ3_S.gguf) | i1-IQ3_S | 6.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Gliese-B1257-i1-GGUF/resolve/main/Gliese-B1257.i1-IQ3_M.gguf) | i1-IQ3_M | 7.0 | |
| [GGUF](https://huggingface.co/mradermacher/Gliese-B1257-i1-GGUF/resolve/main/Gliese-B1257.i1-Q3_K_M.gguf) | i1-Q3_K_M | 7.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Gliese-B1257-i1-GGUF/resolve/main/Gliese-B1257.i1-Q3_K_L.gguf) | i1-Q3_K_L | 8.0 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Gliese-B1257-i1-GGUF/resolve/main/Gliese-B1257.i1-IQ4_XS.gguf) | i1-IQ4_XS | 8.2 | |
| [GGUF](https://huggingface.co/mradermacher/Gliese-B1257-i1-GGUF/resolve/main/Gliese-B1257.i1-Q4_0.gguf) | i1-Q4_0 | 8.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Gliese-B1257-i1-GGUF/resolve/main/Gliese-B1257.i1-IQ4_NL.gguf) | i1-IQ4_NL | 8.6 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Gliese-B1257-i1-GGUF/resolve/main/Gliese-B1257.i1-Q4_K_S.gguf) | i1-Q4_K_S | 8.7 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Gliese-B1257-i1-GGUF/resolve/main/Gliese-B1257.i1-Q4_K_M.gguf) | i1-Q4_K_M | 9.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Gliese-B1257-i1-GGUF/resolve/main/Gliese-B1257.i1-Q4_1.gguf) | i1-Q4_1 | 9.5 | |
| [GGUF](https://huggingface.co/mradermacher/Gliese-B1257-i1-GGUF/resolve/main/Gliese-B1257.i1-Q5_K_S.gguf) | i1-Q5_K_S | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/Gliese-B1257-i1-GGUF/resolve/main/Gliese-B1257.i1-Q5_K_M.gguf) | i1-Q5_K_M | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/Gliese-B1257-i1-GGUF/resolve/main/Gliese-B1257.i1-Q6_K.gguf) | i1-Q6_K | 12.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mochiya98/ruri-v3-310m-onnx | mochiya98 | 2025-05-03T09:25:22Z | 0 | 0 | null | [
"onnx",
"modernbert",
"base_model:cl-nagoya/ruri-v3-310m",
"base_model:quantized:cl-nagoya/ruri-v3-310m",
"license:apache-2.0",
"region:us"
] | null | 2025-05-02T01:47:44Z | ---
license: apache-2.0
base_model:
- cl-nagoya/ruri-v3-310m
--- |
bunnycore/Qwen-3-4B-Refine | bunnycore | 2025-05-03T09:14:45Z | 0 | 0 | null | [
"safetensors",
"qwen3",
"merge",
"mergekit",
"lazymergekit",
"license:apache-2.0",
"region:us"
] | null | 2025-05-03T09:13:15Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
---
# Qwen-3-4B-Refine
Qwen-3-4B-Refine is a passthrough merge of [unsloth/Qwen3-4B](https://huggingface.co/unsloth/Qwen3-4B) with the LoRA adapter [bunnycore/Qwen-3-4b-lora_model](https://huggingface.co/bunnycore/Qwen-3-4b-lora_model), built with [mergekit](https://github.com/cg123/mergekit).
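A minimal usage sketch for the merged checkpoint (assuming it loads as a standard 🤗 causal LM; prompt and settings are illustrative), followed by the merge configuration:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bunnycore/Qwen-3-4B-Refine")
model = AutoModelForCausalLM.from_pretrained("bunnycore/Qwen-3-4B-Refine")

msgs = [{"role": "user", "content": "Summarize the passthrough merge method in one sentence."}]
# Build a chat prompt with the tokenizer's own template.
inputs = tok.apply_chat_template(msgs, add_generation_prompt=True, return_tensors="pt")
print(tok.decode(model.generate(inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```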
## ๐งฉ Configuration
```yaml
base_model: unsloth/Qwen3-4B+bunnycore/Qwen-3-4b-lora_model
dtype: bfloat16
merge_method: passthrough
models:
- model: unsloth/Qwen3-4B+bunnycore/Qwen-3-4b-lora_model
``` |
dgiang02/GRPO_Qwen25_15B_16_1000kmap | dgiang02 | 2025-05-03T09:12:19Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"qwen2",
"text-generation",
"unsloth",
"trl",
"grpo",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-03T09:11:49Z | ---
library_name: transformers
tags:
- unsloth
- trl
- grpo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
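Pending details from the authors, here is a minimal assumed starting point, based only on the repo tags (a conversational Qwen2 text-generation model) and untested against this specific checkpoint:
```python
from transformers import pipeline

# Assumption: standard chat-style text generation works for this checkpoint.
generator = pipeline("text-generation", model="dgiang02/GRPO_Qwen25_15B_16_1000kmap")
messages = [{"role": "user", "content": "What is 17 * 24? Think step by step."}]
print(generator(messages, max_new_tokens=128)[0]["generated_text"])
```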
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mjfmark/qwen2-7b-lora-g2p-pinyin | mjfmark | 2025-05-03T09:04:12Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-03T08:59:47Z | ---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
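Pending details from the authors, here is a minimal assumed starting point, based only on the repo name and tags (a conversational Qwen2 model for grapheme-to-pinyin conversion); the prompt format is a guess:
```python
from transformers import pipeline

# Assumption: the model accepts a plain chat request for pinyin conversion.
generator = pipeline("text-generation", model="mjfmark/qwen2-7b-lora-g2p-pinyin")
messages = [{"role": "user", "content": "请把下面的句子转换为拼音:你好,世界。"}]
print(generator(messages, max_new_tokens=64)[0]["generated_text"])
```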
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mjwong/whisper-large-v3-turbo-singlish | mjwong | 2025-05-03T09:02:13Z | 228 | 0 | transformers | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"en",
"base_model:openai/whisper-large-v3-turbo",
"base_model:finetune:openai/whisper-large-v3-turbo",
"license:mit",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-03-12T16:28:08Z | ---
base_model:
- openai/whisper-large-v3-turbo
language:
- en
metrics:
- wer
pipeline_tag: automatic-speech-recognition
license: mit
library_name: transformers
model-index:
- name: whisper-large-v3-turbo-singlish
results:
- task:
type: automatic-speech-recognition
dataset:
name: SASRBench-v1
type: mjwong/SASRBench-v1
split: test
metrics:
- name: WER
type: WER
value: 13.35
- name: whisper-large-v3-turbo-singlish
results:
- task:
type: automatic-speech-recognition
dataset:
name: AMI
type: edinburghcstr/ami
subset: ihm
split: test
metrics:
- name: WER
type: WER
value: 16.99
- name: whisper-large-v3-turbo-singlish
results:
- task:
type: automatic-speech-recognition
dataset:
name: GigaSpeech
type: speechcolab/gigaspeech
subset: test
split: test
metrics:
- name: WER
type: WER
value: 11.54
tags:
- whisper
---
# Whisper large-v3-turbo-singlish
**Whisper large-v3-turbo-singlish** is a fine-tuned automatic speech recognition (ASR) model optimized for Singlish. Built on OpenAI's Whisper model, it has been adapted using Singlish-specific data to accurately capture the unique phonetic and lexical nuances of Singlish speech.
## Model Details
- **Developed by:** Ming Jie Wong
- **Base Model:** [openai/whisper-large-v3-turbo](https://huggingface.co/openai/whisper-large-v3-turbo)
- **Model Type:** Encoder-decoder
- **Metrics:** Word Error Rate (WER)
- **Languages Supported:** English (with a focus on Singlish)
- **License:** MIT
### Description
Whisper large-v3-turbo-singlish is developed using an internal dataset of 66.9k audio-transcript pairs. The dataset is derived exclusively from the Part 3 Same Room Environment Close-talk Mic recordings of [IMDA's NSC Corpus](https://www.imda.gov.sg/how-we-can-help/national-speech-corpus).
The original Part 3 of the National Speech Corpus comprises approximately 1,000 hours of conversational speech from around 1,000 local English speakers, recorded in pairs. These conversations cover everyday topics and include interactive game-based dialogues. Recordings were conducted in two environments:
- Same Room, where speakers shared a room and were recorded using a close-talk mic and a boundary mic.
- Separate Room, where each speaker was recorded individually using a standing mic and a telephone (IVR).
Audio segments for the internal dataset were extracted using these criteria:
- **Minimum Word Count:** 10 words
_This threshold was chosen to ensure that each audio segment contains sufficient linguistic context for the model to better understand instructions in Singlish. Shorter segments may bias the model towards specific utterances or phrases, limiting its overall comprehension._
- **Maximum Duration:** 20 seconds
_This threshold was chosen to provide enough context for accurate transcription while minimizing noise and computational complexity for longer audio segments._
- **Sampling Rate**: All audio segments are down-sampled to 16kHz.
Full experiment details will be added soon.
### Fine-Tuning Details
We applied fine-tuning on a single A100-80GB GPU.
#### Training Hyperparameters
The following hyperparameters are used:
- **batch_size**: 16
- **gradient_accumulation_steps**: 1
- **learning_rate**: 1e-6
- **warmup_steps**: 300
- **max_steps**: 5000
- **fp16**: true
- **eval_batch_size**: 16
- **eval_step**: 300
- **max_grad_norm**: 1.0
- **generation_max_length**: 225
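For reference, here is a minimal sketch (not the authors' actual training script) of how these settings map onto Hugging Face `Seq2SeqTrainingArguments`; `output_dir` and `predict_with_generate` are assumptions:
```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-large-v3-turbo-singlish",  # assumed
    per_device_train_batch_size=16,
    gradient_accumulation_steps=1,
    learning_rate=1e-6,
    warmup_steps=300,
    max_steps=5000,
    fp16=True,
    per_device_eval_batch_size=16,
    eval_strategy="steps",  # `evaluation_strategy` in older transformers releases
    eval_steps=300,
    max_grad_norm=1.0,
    generation_max_length=225,
    predict_with_generate=True,  # assumed; needed to compute WER during evaluation
)
```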
#### Training Results
The table below summarizes the modelโs progress across various training steps, showing the training loss, evaluation loss, and Word Error Rate (WER).
| Steps | Train Loss | Eval Loss | WER |
|:-----:|:----------:|:---------:|:------------------:|
| 300 | 0.8992 | 0.3501 | 13.376788 |
| 600 | 0.4157 | 0.3241 | 12.769994 |
| 900 | 0.3520 | 0.3124 | 12.168367 |
| 1200 | 0.3415 | 0.3079 | 12.517532 |
| 1500 | 0.3620 | 0.3077 | 12.344057 |
| 1800 | 0.3609 | 0.2996 | 12.315267 |
| 2100 | 0.3348 | 0.2963 | 12.231113 |
| 2400 | 0.3715 | 0.2927 | 12.005226 |
| 2700 | 0.3445 | 0.2923 | 11.829537 |
| 3000 | 0.3753 | 0.2884 | 11.954291 |
| 3300 | 0.3469 | 0.2881 | 11.951338 |
| 3600 | 0.3325 | 0.2857 | 12.145483 |
| 3900 | 0.3168 | 0.2846 | 11.549023 |
| 4200 | 0.3250 | 0.2837 | 11.740215 |
| 4500 | 0.2855 | 0.2834 | 11.634654 |
| 4800 | 0.2936 | 0.2836 | 11.651632 |
The final checkpoint is the one with the lowest WER across the evaluated steps (step 3900, WER 11.55).
### Benchmark Performance
We evaluated Whisper large-v3-turbo-singlish on [SASRBench-v1](https://huggingface.co/datasets/mjwong/SASRBench-v1), a benchmark dataset for evaluating ASR performance on Singlish:
| Model | WER |
|:------------------------------------------------------------------------------------------------------:|:-------:|
| [openai/whisper-small](https://huggingface.co/openai/whisper-small) | 147.80% |
| [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) | 103.41% |
| [jensenlwt/whisper-small-singlish-122k](https://huggingface.co/jensenlwt/whisper-small-singlish-122k) | 68.79% |
| [openai/whisper-large-v3-turbo](https://huggingface.co/openai/whisper-large-v3-turbo) | 27.58% |
| [mjwong/whisper-small-singlish](https://huggingface.co/mjwong/whisper-small-singlish) | 18.49% |
| [mjwong/whisper-large-v3-singlish](https://huggingface.co/mjwong/whisper-large-v3-singlish) | 16.41% |
| [mjwong/whisper-large-v3-turbo-singlish](https://huggingface.co/mjwong/whisper-large-v3-turbo-singlish)| 13.35% |
Additional performance evaluations of the model on other datasets are available [here](https://huggingface.co/mjwong/whisper-large-v3-singlish-DRAFT#model-performance).
## Disclaimer
While this model has been fine-tuned to better recognize Singlish, users may experience inaccuracies, biases, or unexpected outputs, particularly in challenging audio conditions or with speakers using non-standard variations. Use of this model is at your own risk; the developers and distributors are not liable for any consequences arising from its use. Please validate results before deploying in any sensitive or production environment.
## How to use the model
The model can be loaded with the `automatic-speech-recognition` pipeline like so:
```python
from transformers import pipeline
model = "mjwong/whisper-large-v3-turbo-singlish"
pipe = pipeline("automatic-speech-recognition", model)
```
You can then use this pipeline to transcribe audio of arbitrary length.
```python
from datasets import load_dataset
dataset = load_dataset("mjwong/SASRBench-v1", split="test")
sample = dataset[0]["audio"]
result = pipe(sample)
print(result["text"])
```
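For recordings longer than Whisper's 30-second window, the same pipeline supports chunked long-form inference; a sketch with illustrative settings (chunk and batch sizes are assumptions, not tuned values):
```python
# Chunked inference: split long audio into 30 s windows and batch them
result = pipe(sample, chunk_length_s=30, batch_size=8, return_timestamps=True)
print(result["text"])
```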
## Contact
For more information, please reach out to [email protected].
## Acknowledgements
1. https://www.jensenlwt.com/blog/singlish-whisper-finetuning-asr-for-singapore-unique-english
2. https://github.com/huggingface/community-events/blob/main/whisper-fine-tuning-event/README.md
3. https://medium.com/htx-dsai/finetuning-whisper-for-the-singaporean-home-team-context-a3ae1a6ae809
|
AloysiusJoy/AIDetectApp | AloysiusJoy | 2025-05-03T08:53:42Z | 0 | 0 | null | [
"safetensors",
"roberta",
"classifier",
"text-classifier",
"ai-detection",
"text-classification",
"en",
"license:mit",
"region:us"
] | text-classification | 2025-05-03T07:36:24Z | ---
license: mit
language:
- en
pipeline_tag: text-classification
tags:
- classifier
- text-classifier
- ai-detection
--- |
Kube/emoji-vae-init | Kube | 2025-05-03T08:49:19Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"vae",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-03T08:49:17Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
fakeemailqwe/emoji-vae-dataset | fakeemailqwe | 2025-05-03T08:47:21Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"vae",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-03T08:47:16Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
MohamedShakhsak/bert-qa-squad2_V2 | MohamedShakhsak | 2025-05-03T08:43:38Z | 0 | 0 | peft | [
"peft",
"safetensors",
"question-answering",
"squad2",
"lora",
"base_model:google-bert/bert-base-uncased",
"base_model:adapter:google-bert/bert-base-uncased",
"license:apache-2.0",
"region:us"
] | question-answering | 2025-05-03T08:31:07Z | ---
base_model: bert-base-uncased
library_name: peft
license: apache-2.0
tags:
- question-answering
- squad2
- peft
- lora
---
# BERT-QA-SQuAD2 LoRA Adapter
A **Parameter-Efficient (LoRA)** adapter for `bert-base-uncased`, fine-tuned on **SQuAD 2.0** for extractive question answering. Low-rank adaptation keeps the adapter small (~3 MB vs. ~440 MB for the full model) while preserving performance.
## Model Details
### Model Description
- **Developed by:** [Your Name/Organization]
- **Model type:** PEFT (LoRA) adapter for BERT
- **Language(s):** English
- **License:** Apache 2.0
- **Finetuned from:** `bert-base-uncased`
- **Adapter Size:** ~3MB (vs. ~440MB for full BERT)
### Model Sources
- **Repository:** [GitHub link if applicable]
- **Demo:** [Hugging Face Spaces link if available]
## Uses
### Direct Use
```python
from transformers import AutoTokenizer, AutoModelForQuestionAnswering, pipeline
from peft import PeftModel, PeftConfig

# Load the adapter configuration and the underlying base model
config = PeftConfig.from_pretrained("MohamedShakhsak/bert-qa-squad2_V2")
base_model = AutoModelForQuestionAnswering.from_pretrained(config.base_model_name_or_path)
lora_model = PeftModel.from_pretrained(base_model, "MohamedShakhsak/bert-qa-squad2_V2")

# Merge the LoRA weights into the base model for standalone use (optional)
merged_model = lora_model.merge_and_unload()

# Inference
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
qa_pipeline = pipeline("question-answering", model=merged_model, tokenizer=tokenizer)

context = "Hugging Face is based in New York City."
question = "Where is Hugging Face located?"
result = qa_pipeline(question=question, context=context)  # Output: {'answer': 'New York City', ...}
```
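To persist the merged weights for later standalone loading (a small follow-on sketch; the output path is an assumption):
```python
# Save merged model + tokenizer so PEFT is no longer required at load time
merged_model.save_pretrained("bert-qa-squad2-merged")
tokenizer.save_pretrained("bert-qa-squad2-merged")
```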
|
shubhamjuneja/emoji-vae-init | shubhamjuneja | 2025-05-03T08:43:03Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"vae",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-03T08:43:00Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
povilaskv/emoji-vae-init | povilaskv | 2025-05-03T08:41:44Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"vae",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-03T08:41:42Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
DevQuasar/allenai.OLMo-2-0425-1B-Instruct-GGUF | DevQuasar | 2025-05-03T08:37:30Z | 0 | 0 | null | [
"gguf",
"text-generation",
"base_model:allenai/OLMo-2-0425-1B-Instruct",
"base_model:quantized:allenai/OLMo-2-0425-1B-Instruct",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-05-03T08:20:44Z | ---
base_model:
- allenai/OLMo-2-0425-1B-Instruct
pipeline_tag: text-generation
---
[<img src="https://raw.githubusercontent.com/csabakecskemeti/devquasar/main/dq_logo_black-transparent.png" width="200"/>](https://devquasar.com)
Quantized version of: [allenai/OLMo-2-0425-1B-Instruct](https://huggingface.co/allenai/OLMo-2-0425-1B-Instruct)
'Make knowledge free for everyone'
<p align="center">
Made with <br>
<a href="https://www.civo.com/" target="_blank">
<img src="https://www.civo.com/assets/public/brand-assets/civo-logo-colour-60cc1622dedf346f7afde1fff760523f731b0aac106a5465af98ff4073114b74.svg" width="100"/>
</a>
</p>
<a href='https://ko-fi.com/L4L416YX7C' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi6.png?v=6' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>
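A minimal assumed sketch for running one of the quantized files locally with `llama-cpp-python` (the file name below is a placeholder; substitute whichever quant you download from this repo):
```python
from llama_cpp import Llama

# model_path is an assumption; use the downloaded GGUF file name.
llm = Llama(model_path="allenai.OLMo-2-0425-1B-Instruct.Q4_K_M.gguf", n_ctx=4096)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "In one sentence, what is OLMo?"}]
)
print(out["choices"][0]["message"]["content"])
```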
|
Toni238/Intro_365_model | Toni238 | 2025-05-03T08:34:58Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-05-03T08:28:42Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Tuned model according to 365 DataScience course instructions. -->
## Model Details
### Model Description
<!-- Tuned model according to 365 DataScience course instructions. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
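Pending details from the authors, here is a minimal assumed starting point, based only on the repo tags (a DistilBERT text-classification model); the label mapping is unverified:
```python
from transformers import pipeline

# Assumption: the default text-classification pipeline applies to this checkpoint.
classifier = pipeline("text-classification", model="Toni238/Intro_365_model")
print(classifier("This course made fine-tuning surprisingly approachable."))
```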
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Netlive/ModernBertModel_DE_seed42 | Netlive | 2025-05-03T08:29:40Z | 37 | 0 | null | [
"safetensors",
"bert",
"region:us"
] | null | 2025-05-02T14:37:46Z | ---
{}
---
# ModernBertModel_DE_seed42
This is a BERT-based sequence classification model fine-tuned to identify interesting documents in German.
## Evaluation Metrics:
- Precision: 0.8779
- F1 Score: 0.8719
- Accuracy: 0.9310
- ROC AUC: 0.9696
- Custom Score: 0.9136
## Base Model:
- german-cased
## Labels:
- 0 → OTHER
- 1 → INTERESTING
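A minimal usage sketch (assumed: the standard `text-classification` pipeline applies, with German input and the label mapping above):
```python
from transformers import pipeline

# 0 = OTHER, 1 = INTERESTING (see label mapping above)
classifier = pipeline("text-classification", model="Netlive/ModernBertModel_DE_seed42")
print(classifier("Der Quartalsbericht zeigt ein unerwartet starkes Wachstum."))
```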
|
lllyasviel/FramePack_F1_I2V_HY_20250503 | lllyasviel | 2025-05-03T08:28:01Z | 0 | 1 | diffusers | [
"diffusers",
"safetensors",
"region:us"
] | null | 2025-05-03T08:20:00Z | ---
library_name: diffusers
---
This is the forward-only FramePack-F1 with `f1k1_x_td_f16k4f2k2f1k1_g9`, trained with anti-drifting regulations. |
ivangrapher/4851352b-ccd3-4c70-998e-0116b2e22cf1 | ivangrapher | 2025-05-03T08:27:29Z | 0 | 0 | peft | [
"peft",
"safetensors",
"gemma2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/gemma-2-2b",
"base_model:adapter:unsloth/gemma-2-2b",
"license:gemma",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-03T08:13:41Z | ---
library_name: peft
license: gemma
base_model: unsloth/gemma-2-2b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 4851352b-ccd3-4c70-998e-0116b2e22cf1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: unsloth/gemma-2-2b
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 1c537e6c095229b2_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/1c537e6c095229b2_train_data.json
type:
field_input: keywords
field_instruction: intention
field_output: captions_objects
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.55
group_by_length: false
hub_model_id: ivangrapher/4851352b-ccd3-4c70-998e-0116b2e22cf1
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 1.0e-06
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 150
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/1c537e6c095229b2_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 2048
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: e52ffaed-7664-43da-91e5-741ec041028e
wandb_project: s56-7
wandb_run: your_name
wandb_runid: e52ffaed-7664-43da-91e5-741ec041028e
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 4851352b-ccd3-4c70-998e-0116b2e22cf1
This model is a fine-tuned version of [unsloth/gemma-2-2b](https://huggingface.co/unsloth/gemma-2-2b) on the `1c537e6c095229b2_train_data.json` dataset listed in the axolotl config above.
It achieves the following results on the evaluation set:
- Loss: 1.5892
## Model description
More information needed
## Intended uses & limitations
More information needed
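A minimal inference sketch (an assumption based on the config above: a LoRA adapter on `unsloth/gemma-2-2b`, loadable with PEFT):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("unsloth/gemma-2-2b")
model = PeftModel.from_pretrained(base, "ivangrapher/4851352b-ccd3-4c70-998e-0116b2e22cf1")
tokenizer = AutoTokenizer.from_pretrained("unsloth/gemma-2-2b")

# Prompt shape mirrors the training data fields (intention/keywords -> captions)
inputs = tokenizer("Describe the objects for: a cozy reading corner", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```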
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 150
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.3148 | 0.0563 | 150 | 1.5892 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
jordialters/Qwen2.5-7B-Instruct-Gensyn-Swarm-meek_shiny_platypus | jordialters | 2025-05-03T08:23:04Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am meek shiny platypus",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-7B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-7B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-03T06:49:54Z | ---
base_model: Gensyn/Qwen2.5-7B-Instruct
library_name: transformers
model_name: Qwen2.5-7B-Instruct-Gensyn-Swarm-meek_shiny_platypus
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am meek shiny platypus
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-7B-Instruct-Gensyn-Swarm-meek_shiny_platypus
This model is a fine-tuned version of [Gensyn/Qwen2.5-7B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="jordialters/Qwen2.5-7B-Instruct-Gensyn-Swarm-meek_shiny_platypus", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
DevQuasar/allenai.OLMo-2-0425-1B-GGUF | DevQuasar | 2025-05-03T08:20:42Z | 0 | 0 | null | [
"gguf",
"text-generation",
"base_model:allenai/OLMo-2-0425-1B",
"base_model:quantized:allenai/OLMo-2-0425-1B",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-03T08:06:05Z | ---
base_model:
- allenai/OLMo-2-0425-1B
pipeline_tag: text-generation
---
[<img src="https://raw.githubusercontent.com/csabakecskemeti/devquasar/main/dq_logo_black-transparent.png" width="200"/>](https://devquasar.com)
Quantized version of: [allenai/OLMo-2-0425-1B](https://huggingface.co/allenai/OLMo-2-0425-1B)
'Make knowledge free for everyone'
<p align="center">
Made with <br>
<a href="https://www.civo.com/" target="_blank">
<img src="https://www.civo.com/assets/public/brand-assets/civo-logo-colour-60cc1622dedf346f7afde1fff760523f731b0aac106a5465af98ff4073114b74.svg" width="100"/>
</a>
</p>
<a href='https://ko-fi.com/L4L416YX7C' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi6.png?v=6' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>
|
mradermacher/Llama_3.x_70b_Tristar_V2.1-i1-GGUF | mradermacher | 2025-05-03T08:19:40Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:Nexesenex/Llama_3.x_70b_Tristar_V2.1",
"base_model:quantized:Nexesenex/Llama_3.x_70b_Tristar_V2.1",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-05-02T11:30:41Z | ---
base_model: Nexesenex/Llama_3.x_70b_Tristar_V2.1
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Nexesenex/Llama_3.x_70b_Tristar_V2.1
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Llama_3.x_70b_Tristar_V2.1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
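For the split Q6_K below, the parts are plain byte-level splits; a minimal Python sketch for joining them (file names taken from the table):
```python
import shutil

parts = [
    "Llama_3.x_70b_Tristar_V2.1.i1-Q6_K.gguf.part1of2",
    "Llama_3.x_70b_Tristar_V2.1.i1-Q6_K.gguf.part2of2",
]
# Stream-concatenate the parts without loading ~58 GB into memory
with open("Llama_3.x_70b_Tristar_V2.1.i1-Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)
```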
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Tristar_V2.1-i1-GGUF/resolve/main/Llama_3.x_70b_Tristar_V2.1.i1-IQ1_S.gguf) | i1-IQ1_S | 15.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Tristar_V2.1-i1-GGUF/resolve/main/Llama_3.x_70b_Tristar_V2.1.i1-IQ1_M.gguf) | i1-IQ1_M | 16.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Tristar_V2.1-i1-GGUF/resolve/main/Llama_3.x_70b_Tristar_V2.1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 19.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Tristar_V2.1-i1-GGUF/resolve/main/Llama_3.x_70b_Tristar_V2.1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 21.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Tristar_V2.1-i1-GGUF/resolve/main/Llama_3.x_70b_Tristar_V2.1.i1-IQ2_S.gguf) | i1-IQ2_S | 22.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Tristar_V2.1-i1-GGUF/resolve/main/Llama_3.x_70b_Tristar_V2.1.i1-IQ2_M.gguf) | i1-IQ2_M | 24.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Tristar_V2.1-i1-GGUF/resolve/main/Llama_3.x_70b_Tristar_V2.1.i1-Q2_K_S.gguf) | i1-Q2_K_S | 24.6 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Tristar_V2.1-i1-GGUF/resolve/main/Llama_3.x_70b_Tristar_V2.1.i1-Q2_K.gguf) | i1-Q2_K | 26.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Tristar_V2.1-i1-GGUF/resolve/main/Llama_3.x_70b_Tristar_V2.1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 27.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Tristar_V2.1-i1-GGUF/resolve/main/Llama_3.x_70b_Tristar_V2.1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 29.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Tristar_V2.1-i1-GGUF/resolve/main/Llama_3.x_70b_Tristar_V2.1.i1-IQ3_S.gguf) | i1-IQ3_S | 31.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Tristar_V2.1-i1-GGUF/resolve/main/Llama_3.x_70b_Tristar_V2.1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 31.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Tristar_V2.1-i1-GGUF/resolve/main/Llama_3.x_70b_Tristar_V2.1.i1-IQ3_M.gguf) | i1-IQ3_M | 32.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Tristar_V2.1-i1-GGUF/resolve/main/Llama_3.x_70b_Tristar_V2.1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 34.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Tristar_V2.1-i1-GGUF/resolve/main/Llama_3.x_70b_Tristar_V2.1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 37.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Tristar_V2.1-i1-GGUF/resolve/main/Llama_3.x_70b_Tristar_V2.1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 38.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Tristar_V2.1-i1-GGUF/resolve/main/Llama_3.x_70b_Tristar_V2.1.i1-Q4_0.gguf) | i1-Q4_0 | 40.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Tristar_V2.1-i1-GGUF/resolve/main/Llama_3.x_70b_Tristar_V2.1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 40.4 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Tristar_V2.1-i1-GGUF/resolve/main/Llama_3.x_70b_Tristar_V2.1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Tristar_V2.1-i1-GGUF/resolve/main/Llama_3.x_70b_Tristar_V2.1.i1-Q4_1.gguf) | i1-Q4_1 | 44.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Tristar_V2.1-i1-GGUF/resolve/main/Llama_3.x_70b_Tristar_V2.1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Tristar_V2.1-i1-GGUF/resolve/main/Llama_3.x_70b_Tristar_V2.1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 50.1 | |
| [PART 1](https://huggingface.co/mradermacher/Llama_3.x_70b_Tristar_V2.1-i1-GGUF/resolve/main/Llama_3.x_70b_Tristar_V2.1.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Llama_3.x_70b_Tristar_V2.1-i1-GGUF/resolve/main/Llama_3.x_70b_Tristar_V2.1.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 58.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Bujurocks/Bujurocks | Bujurocks | 2025-05-03T08:18:13Z | 0 | 0 | asteroid | [
"asteroid",
"Bujurocks",
"CALLNOMORE",
"music",
"en",
"fr",
"de",
"sv",
"zh",
"ja",
"ko",
"th",
"es",
"tr",
"gd",
"el",
"dataset:nvidia/OpenCodeReasoning",
"dataset:nvidia/OpenMathReasoning",
"dataset:nvidia/Llama-Nemotron-Post-Training-Dataset",
"dataset:zwhe99/DeepMath-103K",
"base_model:HiDream-ai/HiDream-I1-Full",
"base_model:finetune:HiDream-ai/HiDream-I1-Full",
"region:us"
] | null | 2025-05-03T08:10:16Z | ---
datasets:
- nvidia/OpenCodeReasoning
- nvidia/OpenMathReasoning
- nvidia/Llama-Nemotron-Post-Training-Dataset
- zwhe99/DeepMath-103K
language:
- en
- fr
- de
- sv
- zh
- ja
- ko
- th
- es
- tr
- gd
- el
metrics:
- accuracy
- code_eval
- character
- chrf
- bleu
- bertscore
- brier_score
new_version: microsoft/bitnet-b1.58-2B-4T
library_name: asteroid
tags:
- Bujurocks
- CALLNOMORE
- music
base_model:
- microsoft/bitnet-b1.58-2B-4T
- meta-llama/Llama-4-Scout-17B-16E-Instruct
- HiDream-ai/HiDream-I1-Full
--- |
memeviss/zombieVII_9 | memeviss | 2025-05-03T08:17:12Z | 0 | 0 | null | [
"safetensors",
"region:us"
] | null | 2025-05-03T07:39:42Z | # Optimized TTS Model
This model has been optimized for 100% TOP1 performance using advanced parameter enhancement techniques.
## Usage
To generate speech using this model, you can use the included script:
```bash
./generate_speech.py --text "Your text here" --output_path output.wav
```
For more details, see the optimization report in this directory.
|
vermoney/54bbae63-933b-4e8f-935e-4da3188e2632 | vermoney | 2025-05-03T08:17:07Z | 0 | 0 | peft | [
"peft",
"safetensors",
"gemma2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/gemma-2-2b",
"base_model:adapter:unsloth/gemma-2-2b",
"license:gemma",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-03T08:13:43Z | ---
library_name: peft
license: gemma
base_model: unsloth/gemma-2-2b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 54bbae63-933b-4e8f-935e-4da3188e2632
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/gemma-2-2b
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 1c537e6c095229b2_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/1c537e6c095229b2_train_data.json
type:
field_input: keywords
field_instruction: intention
field_output: captions_objects
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: vermoney/54bbae63-933b-4e8f-935e-4da3188e2632
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/1c537e6c095229b2_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: e52ffaed-7664-43da-91e5-741ec041028e
wandb_project: s56-9
wandb_run: your_name
wandb_runid: e52ffaed-7664-43da-91e5-741ec041028e
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 54bbae63-933b-4e8f-935e-4da3188e2632
This model is a fine-tuned version of [unsloth/gemma-2-2b](https://huggingface.co/unsloth/gemma-2-2b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1001
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.0742 | 0.0751 | 200 | 1.1001 |
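### Inference example

The card does not show how to load the trained adapter; a minimal sketch, assuming the LoRA weights were pushed in PEFT format under the `hub_model_id` from the config above:

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# loads the base model and applies the adapter in one call
model = AutoPeftModelForCausalLM.from_pretrained("vermoney/54bbae63-933b-4e8f-935e-4da3188e2632")
tokenizer = AutoTokenizer.from_pretrained("unsloth/gemma-2-2b")  # base model named above
```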
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
banriesreal/dfsvdfv | banriesreal | 2025-05-03T08:13:43Z | 0 | 0 | null | [
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | 2025-05-03T08:13:43Z | ---
license: bigscience-bloom-rail-1.0
---
|
DuongTrongChi/qwen2.5-it-sft-v1 | DuongTrongChi | 2025-05-03T08:13:11Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/Qwen2.5-1.5B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-1.5B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-03T08:12:51Z | ---
base_model: unsloth/Qwen2.5-1.5B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** DuongTrongChi
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2.5-1.5B-Instruct
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mradermacher/IndoT5-base-medical-qa-GGUF | mradermacher | 2025-05-03T08:13:06Z | 0 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us"
] | null | 2025-05-03T08:10:29Z | <!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Yaziddd/IndoT5-base-medical-qa
|
gradientrouting-spar/toy_goodharting_gemma-2-2b-it_fruits_vegetables_naive_outcome_0_6_0_25_MC | gradientrouting-spar | 2025-05-03T08:11:08Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-03T08:10:49Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
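A minimal sketch, assuming — from the repository name alone — that this is a Gemma-2-2b-it fine-tune usable through the standard `transformers` pipeline:

```python
from transformers import pipeline

# repo id taken from this model page; chat-style input is an assumption
pipe = pipeline(
    "text-generation",
    model="gradientrouting-spar/toy_goodharting_gemma-2-2b-it_fruits_vegetables_naive_outcome_0_6_0_25_MC",
)
out = pipe([{"role": "user", "content": "Name three vegetables."}],
           max_new_tokens=64, return_full_text=False)
print(out[0]["generated_text"])
```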
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
leo009/Qwen3-lora_model | leo009 | 2025-05-03T08:11:01Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-03T08:10:49Z | ---
base_model: unsloth/qwen3-14b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** leo009
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen3-14b-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
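A minimal loading sketch, assuming this repo holds LoRA adapter weights for the base model above; Unsloth can typically resolve an adapter against its base automatically:

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="leo009/Qwen3-lora_model",  # adapter repo (this page)
    max_seq_length=2048,                   # illustrative value
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to fast inference mode
```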
|
AlSamCur123/Mistral-Small-Instruct-2409 | AlSamCur123 | 2025-05-03T08:10:42Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gguf",
"mistral",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-03T02:46:41Z | ---
base_model: unsloth/mistral-small-instruct-2409-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** AlSamCur123
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-small-instruct-2409-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
John6666/real-illustrious-test-sdxl | John6666 | 2025-05-03T08:09:37Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"realistic",
"photorealistic",
"realism",
"natural language response",
"test model",
"Illustrious XL v2.0",
"illustrious",
"en",
"base_model:OnomaAIResearch/Illustrious-XL-v2.0",
"base_model:finetune:OnomaAIResearch/Illustrious-XL-v2.0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2025-05-03T08:03:47Z | ---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- realistic
- photorealistic
- realism
- natural language response
- test model
- Illustrious XL v2.0
- illustrious
base_model: OnomaAIResearch/Illustrious-XL-v2.0
---
Original model is [here](https://civitai.com/models/1536689/real-illustrious-illustrious-v20-test-model?modelVersionId=1738732).
This model was created by [dendenmusimusi05490](https://civitai.com/user/dendenmusimusi05490).
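No usage snippet is provided; a minimal sketch with `diffusers`, per this repo's `StableDiffusionXLPipeline` tag:

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "John6666/real-illustrious-test-sdxl", torch_dtype=torch.float16
).to("cuda")
image = pipe("photorealistic portrait, natural light",
             num_inference_steps=28).images[0]  # step count is illustrative
image.save("sample.png")
```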
|
JatinkInnovision/ComFit | JatinkInnovision | 2025-05-03T08:07:44Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"unsloth",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-02T05:33:37Z | ---
library_name: transformers
tags:
- unsloth
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
necijajbosco/sdefsdfsd | necijajbosco | 2025-05-03T08:06:08Z | 0 | 0 | null | [
"license:bigscience-openrail-m",
"region:us"
] | null | 2025-05-03T08:06:08Z | ---
license: bigscience-openrail-m
---
|
kk-aivio/af681b1f-7094-44ee-af6c-b967e14934c5 | kk-aivio | 2025-05-03T08:03:24Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2025-05-03T08:02:55Z | ---
library_name: transformers
model_name: kk-aivio/af681b1f-7094-44ee-af6c-b967e14934c5
tags:
- generated_from_trainer
licence: license
---
# Model Card for kk-aivio/af681b1f-7094-44ee-af6c-b967e14934c5
This model is a fine-tuned version of [None](https://huggingface.co/None).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="kk-aivio/af681b1f-7094-44ee-af6c-b967e14934c5", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.5.1+cu124
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |