modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
---|---|---|---|---|---|---|---|---|---|
fineinstructions/template_instantiator_intermediate | fineinstructions | 2025-05-02T16:09:31Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-02T16:05:46Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
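The auto-generated card does not fill this in. Based only on the repo's metadata tags (`transformers` library, `text-generation` pipeline), a minimal generic loading sketch, offered as an assumption rather than author-provided usage:

```python
# Generic sketch inferred from the metadata tags; not provided by the author.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="fineinstructions/template_instantiator_intermediate",
)
print(generator("Hello!", max_new_tokens=32)[0]["generated_text"])
```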
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
masani/SFT_cumulative_parity_length_32_bitwidth_1_2048_512_Qwen2-1.5B_epoch_100_global_step_800 | masani | 2025-05-02T16:09:10Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-02T16:05:57Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
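Since the card leaves this section empty, here is a minimal sketch based only on the metadata (Qwen2 architecture, `text-generation` pipeline tag); an assumption, not author-provided usage:

```python
# Generic sketch from the metadata tags; the prompt is a placeholder.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="masani/SFT_cumulative_parity_length_32_bitwidth_1_2048_512_Qwen2-1.5B_epoch_100_global_step_800",
)
print(generator("Hello!", max_new_tokens=32)[0]["generated_text"])
```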
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
CarlosHudson/clinica_dentaria | CarlosHudson | 2025-05-02T16:04:56Z | 0 | 0 | adapter-transformers | [
"adapter-transformers",
"medical",
"translation",
"aa",
"dataset:nvidia/OpenCodeReasoning",
"base_model:deepseek-ai/DeepSeek-V3-0324",
"base_model:adapter:deepseek-ai/DeepSeek-V3-0324",
"license:openrail",
"region:us"
] | translation | 2025-05-02T16:01:46Z | ---
license: openrail
datasets:
- nvidia/OpenCodeReasoning
language:
- aa
metrics:
- accuracy
base_model:
- deepseek-ai/DeepSeek-V3-0324
new_version: microsoft/bitnet-b1.58-2B-4T
pipeline_tag: translation
library_name: adapter-transformers
tags:
- medical
--- |
martin-rizzo/TinyBreaker.prototype1 | martin-rizzo | 2025-05-02T16:04:03Z | 0 | 1 | null | [
"image-generation",
"text-to-image",
"art",
"pixart-sigma",
"image",
"en",
"arxiv:2403.04692",
"base_model:PixArt-alpha/PixArt-Sigma-XL-2-1024-MS",
"base_model:finetune:PixArt-alpha/PixArt-Sigma-XL-2-1024-MS",
"license:mit",
"region:us"
] | text-to-image | 2025-05-02T01:04:59Z | ---
license: mit
language:
- en
base_model:
- PixArt-alpha/PixArt-Sigma-XL-2-1024-MS
- stable-diffusion-v1-5/stable-diffusion-v1-5
tags:
- image-generation
- text-to-image
- art
- pixart-sigma
- image
---
# TinyBreaker (prototype1)
<div style="display:flex;justify-content: left">
<a href="https://github.com/martin-rizzo/ComfyUI-TinyBreaker"><img src="https://img.shields.io/badge/GitHub-TinyBreaker-EEE?logo=github&logoColor=white&labelColor=444444" alt="GitHub: TinyBreaker"></a>  
<a href="https://civitai.com/models/1213728"><img src="https://img.shields.io/badge/CivitAI%3A-TinyBreaker-EEE?logo=c%2B%2B&logoColor=white&labelColor=1971C2" alt="CivitAI: TinyBreaker"></a>  
</div>

## Overview
**TinyBreaker** is a hybrid two-step model (base + refiner) designed for efficient image generation on mid-range and low-end hardware. By combining the strengths of the PixArt and Photon models, it delivers high-quality images with strong prompt adherence.
## Key Features
- **Hybrid Two-Step Architecture**: Combines PixArt-Sigma as the base model with a refiner based on Photon (or any SD1.x model), both chosen for their low GPU consumption.
- **Efficient Parameter Usage**: The base model’s 0.6 billion parameters enable high-quality image generation with minimal computational overhead.
- **Fast Performance**: Produces high-quality 1536×1024 images in ~15 seconds on an NVIDIA RTX 3080 GPU, with ongoing work to cut generation times to under 10 seconds.
- **High Prompt Adherence**: Generates images that closely match user prompts and expectations, thanks to the robust performance of the PixArt-Sigma model and the T5 text encoder.
- **Optimized Latent Space Processing**: Leverages Tiny Autoencoders for efficient latent space conversion (see the sketch after this list).
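Not part of the original card: a minimal sketch of what Tiny Autoencoder latent decoding looks like in general, using the `AutoencoderTiny` class from `diffusers` and the `madebyollin/taesd` weights linked in the acknowledgments. This illustrates the technique only; TinyBreaker itself runs through ComfyUI nodes, not this API.

```python
# Illustrative only: decoding SD1.x-style latents with a Tiny Autoencoder
# (madebyollin/taesd), the kind of model TinyBreaker uses for fast
# latent-space conversion. TinyBreaker itself is driven via ComfyUI nodes.
import torch
from diffusers import AutoencoderTiny

device = "cuda" if torch.cuda.is_available() else "cpu"
vae = AutoencoderTiny.from_pretrained("madebyollin/taesd").to(device)

# A dummy SD1.x-style latent: 4 channels at 1/8 the pixel resolution,
# here sized for a 1024x1536 output.
latents = torch.randn(1, 4, 128, 192, device=device)

with torch.no_grad():
    image = vae.decode(latents).sample  # (1, 3, 1024, 1536), roughly in [-1, 1]
```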
## Usage Requirements
Currently, TinyBreaker can only be used with ComfyUI. To utilize it, you'll need to install the custom nodes specific to this model through the [ComfyUI-TinyBreaker GitHub repository](https://github.com/martin-rizzo/ComfyUI-TinyBreaker).
## Limitations
- **Text Generation**: Generating legible text within images is a challenge due to PixArt's training limitations. Enhancements in this area may require extensive retraining.
- **Human Anatomy in Complex Poses**: While the model performs reliably with standard poses (e.g., standing, facing the camera), it struggles with anatomical accuracy in poses that require more complex or dynamic actions.
- **Complex Human Interactions**: The model has difficulty generating detailed scenes involving intricate interactions among people, as well as interactions between people and objects, such as collaborative tasks or dynamic object manipulation.
Note: The current "Prototype1" version of TinyBreaker utilizes the PixArt-Sigma 1024 and Photon models **without any additional training or fine-tuning**. In the future, if I have the resources, I plan to train both models together to generate images of even greater quality.
## Future Directions
I am dedicated to improving TinyBreaker's performance and accessibility, especially for users with mid-range or lower-end hardware. Stay tuned for future updates as I continue to expand TinyBreaker's capabilities.
## Acknowledgments
* I extend my sincere thanks to the PixArt-Σ developers for their exceptional model, which has been vital to this project's development.
[PixArt-Σ GitHub Repository](https://github.com/PixArt-alpha/PixArt-sigma) | [PixArt-Σ Hugging Face Model](https://huggingface.co/PixArt-alpha/PixArt-Sigma-XL-2-1024-MS) | [PixArt-Σ arXiv Report](https://arxiv.org/abs/2403.04692)
* Additional thanks to Ollin Boer Bohan for the Tiny AutoEncoder models, which offer efficient latent image processing and served as the foundation for the encoding, decoding, and transcoding operations in TinyBreaker.
[Tiny AutoEncoder GitHub Repository](https://github.com/madebyollin/taesd)
## Resources
- [TinyBreaker on CivitAI](https://civitai.com/models/1213728/tinybreaker): A hub for exploring generated images, prompts, and workflows created by me and the community, showcasing the model's output quality.
- [ComfyUI-TinyBreaker](https://github.com/martin-rizzo/ComfyUI-TinyBreaker): Nodes and workflows for ComfyUI to experiment with the model's capabilities.
- [TinyBreakerTools](https://github.com/martin-rizzo/TinyBreakerTools): Tools I'm building for the model, mainly to create the safetensors file for TinyBreaker.
- [AbominableWorkflows](https://github.com/martin-rizzo/AbominableWorkflows): A predecessor of TinyBreaker. My first experiment combining PixArt-Sigma and Photon without Python code, using only standard nodes from ComfyUI.
|
cryptolemon/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-regal_rough_salamander | cryptolemon | 2025-05-02T16:02:05Z | 21 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am regal rough salamander",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-11T12:39:53Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-regal_rough_salamander
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am regal rough salamander
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-regal_rough_salamander
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="cryptolemon/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-regal_rough_salamander", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.5.1
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
vanwdai/byt5-base-vi-ocr-correction | vanwdai | 2025-05-02T15:52:33Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"vi",
"dataset:vanwdai/raw_text_ocr_text",
"arxiv:1910.09700",
"base_model:google/byt5-base",
"base_model:finetune:google/byt5-base",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-05-02T13:48:11Z | ---
library_name: transformers
datasets:
- vanwdai/raw_text_ocr_text
language:
- vi
metrics:
- cer
base_model:
- google/byt5-base
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
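The card leaves this empty; from the metadata (ByT5 base, `text2text-generation` pipeline, Vietnamese OCR correction), a minimal sketch, as an assumption rather than author-provided usage:

```python
# Generic sketch from the metadata tags; the input string is a placeholder
# standing in for noisy Vietnamese OCR output.
from transformers import pipeline

corrector = pipeline(
    "text2text-generation",
    model="vanwdai/byt5-base-vi-ocr-correction",
)
print(corrector("toi di hoc o truong dai hoc", max_new_tokens=64)[0]["generated_text"])
```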
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
flyshoppa/mmarco-mnrl-bert-base-italian-uncased-finetuned-20250430 | flyshoppa | 2025-05-02T15:52:02Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2025-05-02T15:50:37Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
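The auto-generated card provides no code here. Going only by the metadata (BERT architecture, `fill-mask` pipeline tag), a minimal generic sketch; the example sentence and the `[MASK]` token are assumptions (BERT-style models conventionally use `[MASK]`):

```python
# Generic sketch based on the fill-mask pipeline tag; not author-provided.
from transformers import pipeline

unmasker = pipeline(
    "fill-mask",
    model="flyshoppa/mmarco-mnrl-bert-base-italian-uncased-finetuned-20250430",
)
print(unmasker("Roma è la [MASK] d'Italia."))
```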
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
islomov/navaistt_v1_medium | islomov | 2025-05-02T15:48:27Z | 0 | 2 | null | [
"safetensors",
"whisper",
"automatic-speech-recognition",
"audio-transcription",
"uzbek",
"fine-tuned",
"speech-recognition",
"uz",
"license:apache-2.0",
"region:us"
] | automatic-speech-recognition | 2025-05-02T15:17:31Z | ---
language:
- uz
license: apache-2.0
tags:
- whisper
- automatic-speech-recognition
- audio-transcription
- uzbek
- fine-tuned
- speech-recognition
---
# NavaiSTT-1v Medium - Uzbek Speech-to-Text Model
A classic Whisper medium model fine-tuned for Uzbek. The dataset included ~700 hours of diverse audio: publicly available podcasts, Tashkent-dialect podcasts, audiobooks, and Common Voice 17. Data quality was mixed: 60% was human-transcribed and 40% pseudo-transcribed using Gemini 2.5 Pro.
Special attention was given to Tashkent dialect audio materials, resulting in strong performance on this dialect. Future versions will include other regional dialects to improve overall coverage.
## Whitepaper
For more details on the methodology and research behind this model, visit: https://uz-speech.web.app/navaistt01m
## Model Details
- **Base Model:** Whisper Medium
- **Parameters:** 769M
- **Performance** (see the evaluation sketch after this list):
- WER: ~13%
- CER: ~3.5%
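Not in the original card, but for readers who want to reproduce these numbers: a minimal sketch of computing WER/CER with the `jiwer` library. The reference string is a placeholder, and `transcribe_audio` is the helper defined in the usage example further down this card.

```python
# Sketch (not from the original card) of computing WER/CER with jiwer.
from jiwer import wer, cer

reference = "placeholder ground-truth transcript"            # placeholder
hypothesis = transcribe_audio("some_audio_max_30_sec.wav")   # model output

print(f"WER: {wer(reference, hypothesis):.3f}")
print(f"CER: {cer(reference, hypothesis):.3f}")
```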
## Training Data
This model was fine-tuned on approximately 700 hours of diverse Uzbek audio data including:
- Publicly available podcasts
- Tashkent dialect podcasts
- Audiobooks
- Common Voice 17 dataset
The dataset consisted of 60% human-transcribed and 40% pseudo-transcribed material (using Gemini 2.5 Pro). Special attention was given to Tashkent dialect audio materials to ensure strong performance on this dialect.
## Usage Example
```python
import torch
import torchaudio
from transformers import WhisperProcessor, WhisperForConditionalGeneration

# Load model and processor
processor = WhisperProcessor.from_pretrained("islomov/navaistt_v1_medium")
model = WhisperForConditionalGeneration.from_pretrained("islomov/navaistt_v1_medium")

def transcribe_audio(audio_path):
    global model

    # Move to GPU if available
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = model.to(device)

    # Load and preprocess audio
    waveform, sample_rate = torchaudio.load(audio_path)
    if sample_rate != 16000:
        waveform = torchaudio.functional.resample(waveform, sample_rate, 16000)

    # Convert to mono if needed
    if waveform.shape[0] > 1:
        waveform = waveform.mean(dim=0, keepdim=True)

    # Extract log-mel input features
    input_features = processor(
        waveform.squeeze().numpy(),
        sampling_rate=16000,
        return_tensors="pt",
    ).input_features.to(device)

    # Generate transcription (the target language is set at generation time)
    with torch.no_grad():
        predicted_ids = model.generate(input_features, language="uz")

    # Decode
    transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)[0]
    return transcription

# Example usage
if __name__ == "__main__":
    audio_file = "some_audio_max_30_sec.wav"
    text = transcribe_audio(audio_file)
    print(f"Transcription: {text}")
```
## Future Improvements
Future versions will include more regional Uzbek dialects to improve overall coverage.
|
NadirFartas/araT5-qg-final | NadirFartas | 2025-05-02T15:36:32Z | 0 | 0 | null | [
"safetensors",
"t5",
"license:apache-2.0",
"region:us"
] | null | 2025-05-02T15:23:13Z | ---
license: apache-2.0
---
|
biustnaspust/purpur24 | biustnaspust | 2025-05-02T15:35:51Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-02T15:28:56Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
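This section is empty in the auto-generated card. From the metadata alone (Llama architecture, `text-generation` pipeline tag), a minimal generic sketch, an assumption rather than author-provided usage:

```python
# Generic sketch from the metadata tags; not provided by the author.
from transformers import pipeline

generator = pipeline("text-generation", model="biustnaspust/purpur24")
print(generator("Hello!", max_new_tokens=32)[0]["generated_text"])
```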
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Omar401/healthcare-chatbot-lora | Omar401 | 2025-05-02T15:34:45Z | 15 | 0 | peft | [
"peft",
"safetensors",
"llama",
"arxiv:1910.09700",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-04-25T10:05:12Z | ---
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
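The section above is empty in the auto-generated card. Based on the repo metadata (a PEFT LoRA adapter on TinyLlama-1.1B-Chat, 8-bit bitsandbytes), a minimal loading sketch, offered as an assumption rather than author-provided usage:

```python
# Sketch based on the repo metadata (PEFT adapter with base model
# TinyLlama/TinyLlama-1.1B-Chat-v1.0); not provided by the model author.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "TinyLlama/TinyLlama-1.1B-Chat-v1.0", device_map="auto"
)
model = PeftModel.from_pretrained(base, "Omar401/healthcare-chatbot-lora")
tokenizer = AutoTokenizer.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")
```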
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.14.0 |
ibm-granite/granite-3.3-2b-base-GGUF | ibm-granite | 2025-05-02T15:32:57Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"language",
"granite-3.3",
"text-generation",
"base_model:ibm-granite/granite-3.3-2b-base",
"base_model:quantized:ibm-granite/granite-3.3-2b-base",
"license:apache-2.0",
"region:us"
] | text-generation | 2025-05-02T14:50:10Z | ---
pipeline_tag: text-generation
inference: false
license: apache-2.0
library_name: transformers
tags:
- language
- granite-3.3
- gguf
base_model:
- ibm-granite/granite-3.3-2b-base
---
> [!NOTE]
> This repository contains models that have been converted to the GGUF format with various quantizations from an IBM Granite base model.
>
> Please reference the base model's full model card here:
> https://huggingface.co/ibm-granite/granite-3.3-2b-base-GGUF
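Not part of the upstream note: GGUF files are typically consumed with llama.cpp or its Python bindings. A minimal sketch using `llama-cpp-python`; the quantization filename glob is an assumption, so check the repo's file list for the exact file you want:

```python
# Sketch, not from the model card: loading one of this repo's GGUF
# quantizations with llama-cpp-python. The filename glob is an assumption.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="ibm-granite/granite-3.3-2b-base-GGUF",
    filename="*Q4_K_M.gguf",  # assumed naming; pick a real file from the repo
)
print(llm("The capital of France is", max_tokens=16)["choices"][0]["text"])
```
|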
Elahe96/LunarLander-v2 | Elahe96 | 2025-05-02T15:31:23Z | 0 | 0 | null | [
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] | reinforcement-learning | 2025-05-02T15:31:17Z | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -189.56 +/- 129.58
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo',
 'seed': 1,
 'torch_deterministic': True,
 'cuda': True,
 'track': False,
 'wandb_project_name': 'cleanRL',
 'wandb_entity': None,
 'capture_video': False,
 'env_id': 'LunarLander-v2',
 'total_timesteps': 50000,
 'learning_rate': 0.00025,
 'num_envs': 4,
 'num_steps': 128,
 'anneal_lr': True,
 'gae': True,
 'gamma': 0.99,
 'gae_lambda': 0.95,
 'num_minibatches': 4,
 'update_epochs': 4,
 'norm_adv': True,
 'clip_coef': 0.2,
 'clip_vloss': True,
 'ent_coef': 0.01,
 'vf_coef': 0.5,
 'max_grad_norm': 0.5,
 'target_kl': None,
 'repo_id': 'Elahe96/LunarLander-v2',
 'batch_size': 512,
 'minibatch_size': 128}
```
|
KSJcompany/LLM-assignment1-KoBERT-1 | KSJcompany | 2025-05-02T15:30:20Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/Llama-3.2-1B-unsloth-bnb-4bit",
"base_model:finetune:unsloth/Llama-3.2-1B-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-02T15:28:21Z | ---
base_model: unsloth/Llama-3.2-1B-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** KSJcompany
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Llama-3.2-1B-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
sandman4/Qwen3-32B-GPTQ-8bit | sandman4 | 2025-05-02T15:30:07Z | 0 | 0 | null | [
"safetensors",
"qwen3",
"base_model:Qwen/Qwen3-32B",
"base_model:quantized:Qwen/Qwen3-32B",
"license:apache-2.0",
"8-bit",
"gptq",
"region:us"
] | null | 2025-05-02T15:14:50Z | ---
license: apache-2.0
base_model:
- Qwen/Qwen3-32B
---
# Qwen3-32B Quantized Model
An 8-bit quantized version of [Qwen3-32B](https://huggingface.co/Qwen/Qwen3-32B), produced with gptqmodel.
## Quantization
```python
from datasets import load_dataset
from gptqmodel import GPTQModel, QuantizeConfig
import sys
model_id = sys.argv[1]
quant_path = "quantized_model"
# Load calibration data (1024 samples from C4)
calibration_dataset = load_dataset(
    "allenai/c4",
    data_files="en/c4-train.00001-of-01024.json.gz",
    split="train"
).select(range(1024))["text"]
# Configure and run quantization
quant_config = QuantizeConfig(bits=8, group_size=128)
model = GPTQModel.load(model_id, quant_config)
model.quantize(calibration_dataset, batch_size=2)
model.save(quant_path)
```
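## Inference

Not from the original card: the resulting GPTQ checkpoint (or this repo directly) can presumably be loaded through the standard `transformers` GPTQ integration; a hedged sketch (requires the GPTQ/optimum extras to be installed):

```python
# Sketch, not author-provided: loading the GPTQ checkpoint for inference
# via transformers' GPTQ integration.
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "sandman4/Qwen3-32B-GPTQ-8bit", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("sandman4/Qwen3-32B-GPTQ-8bit")

inputs = tokenizer("Hello", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=16)[0]))
```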
## License
Apache 2.0. See LICENSE.txt. |
Kenazin/Mistral-7B-peft-p-tuning-v2-9 | Kenazin | 2025-05-02T15:29:22Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-02T15:29:18Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
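The card gives no usage code. The repo name suggests a PEFT p-tuning adapter for Mistral-7B; if so, it could be loaded roughly as below. This is speculative: both the adapter assumption and the base-tokenizer id are guesses, not author-provided.

```python
# Speculative sketch: assumes this repo hosts a PEFT p-tuning adapter for
# Mistral-7B, as the repo name suggests. The tokenizer id is also a guess.
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

model = AutoPeftModelForCausalLM.from_pretrained("Kenazin/Mistral-7B-peft-p-tuning-v2-9")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
```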
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
aciang/gemma-3-4b-iTK | aciang | 2025-05-02T15:29:03Z | 0 | 0 | transformers | [
"transformers",
"gemma3_text",
"text-generation",
"text-generation-inference",
"unsloth",
"gemma3",
"conversational",
"en",
"base_model:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-02T15:26:22Z | ---
base_model: unsloth/gemma-3-4b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** aciang
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-3-4b-it-unsloth-bnb-4bit
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
AdoCleanCode/real_model_ag_news_v6 | AdoCleanCode | 2025-05-02T15:28:15Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-02T10:41:30Z | ---
library_name: transformers
license: mit
base_model: gpt2
tags:
- generated_from_trainer
model-index:
- name: real_model_ag_news_v6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# real_model_ag_news_v6
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9464
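For reference, a validation loss of 2.9464 corresponds to a perplexity of exp(2.9464) ≈ 19.0. The card itself ships no usage code; since this is a standard GPT-2 fine-tune, a minimal generic sketch (an assumption based on the metadata, not author-provided):

```python
# Generic sketch inferred from the metadata (gpt2 base, text-generation);
# the prompt is a placeholder.
from transformers import pipeline

generator = pipeline("text-generation", model="AdoCleanCode/real_model_ag_news_v6")
print(generator("Breaking news:", max_new_tokens=40)[0]["generated_text"])
```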
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 3.2618 | 1.0 | 5100 | 3.0942 |
| 3.0766 | 2.0 | 10200 | 3.0073 |
| 2.9453 | 3.0 | 15300 | 2.9701 |
| 2.885 | 4.0 | 20400 | 2.9518 |
| 2.8458 | 5.0 | 25500 | 2.9464 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.4.1+cu121
- Datasets 2.19.1
- Tokenizers 0.20.3
|
0xtinuviel/Qwen2.5-72B-Instruct-bnb-4bit-Gensyn-Swarm-freckled_domestic_capybara | 0xtinuviel | 2025-05-02T15:24:51Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am freckled domestic capybara",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-72B-Instruct-bnb-4bit",
"base_model:finetune:Gensyn/Qwen2.5-72B-Instruct-bnb-4bit",
"endpoints_compatible",
"region:us"
] | null | 2025-05-01T23:13:51Z | ---
base_model: Gensyn/Qwen2.5-72B-Instruct-bnb-4bit
library_name: transformers
model_name: Qwen2.5-72B-Instruct-bnb-4bit-Gensyn-Swarm-freckled_domestic_capybara
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am freckled domestic capybara
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-72B-Instruct-bnb-4bit-Gensyn-Swarm-freckled_domestic_capybara
This model is a fine-tuned version of [Gensyn/Qwen2.5-72B-Instruct-bnb-4bit](https://huggingface.co/Gensyn/Qwen2.5-72B-Instruct-bnb-4bit).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="0xtinuviel/Qwen2.5-72B-Instruct-bnb-4bit-Gensyn-Swarm-freckled_domestic_capybara", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
toilahonganh1712/Meta-Llama-3.1-8B-q4-Travel-VungTau-LORA | toilahonganh1712 | 2025-05-02T15:22:26Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-02T15:22:18Z | ---
base_model: unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** toilahonganh1712
- **License:** apache-2.0
- **Finetuned from model:** unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mradermacher/Phi-4-reasoning-Line-14b-karcher-GGUF | mradermacher | 2025-05-02T15:22:24Z | 61 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:mergekit-community/Phi-4-reasoning-Line-14b-karcher",
"base_model:quantized:mergekit-community/Phi-4-reasoning-Line-14b-karcher",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-01T21:29:29Z | ---
base_model: mergekit-community/Phi-4-reasoning-Line-14b-karcher
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/mergekit-community/Phi-4-reasoning-Line-14b-karcher
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Phi-4-reasoning-Line-14b-karcher-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
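For a quick programmatic smoke test, here is a minimal sketch using the `llama-cpp-python` bindings (an assumption — any llama.cpp-compatible runtime works); the file name comes from the Provided Quants table below:

```python
# Minimal sketch, assuming the llama-cpp-python bindings are installed
# (pip install llama-cpp-python huggingface_hub).
# The file name is one of those listed under Provided Quants below.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/Phi-4-reasoning-Line-14b-karcher-GGUF",
    filename="Phi-4-reasoning-Line-14b-karcher.Q4_K_M.gguf",
)
out = llm("Explain model quantization in one sentence:", max_tokens=64)
print(out["choices"][0]["text"])
```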
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Phi-4-reasoning-Line-14b-karcher-GGUF/resolve/main/Phi-4-reasoning-Line-14b-karcher.Q2_K.gguf) | Q2_K | 5.6 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-reasoning-Line-14b-karcher-GGUF/resolve/main/Phi-4-reasoning-Line-14b-karcher.Q3_K_S.gguf) | Q3_K_S | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-reasoning-Line-14b-karcher-GGUF/resolve/main/Phi-4-reasoning-Line-14b-karcher.Q3_K_M.gguf) | Q3_K_M | 7.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-reasoning-Line-14b-karcher-GGUF/resolve/main/Phi-4-reasoning-Line-14b-karcher.Q3_K_L.gguf) | Q3_K_L | 8.0 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-reasoning-Line-14b-karcher-GGUF/resolve/main/Phi-4-reasoning-Line-14b-karcher.IQ4_XS.gguf) | IQ4_XS | 8.1 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-reasoning-Line-14b-karcher-GGUF/resolve/main/Phi-4-reasoning-Line-14b-karcher.Q4_K_S.gguf) | Q4_K_S | 8.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-reasoning-Line-14b-karcher-GGUF/resolve/main/Phi-4-reasoning-Line-14b-karcher.Q4_K_M.gguf) | Q4_K_M | 9.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-reasoning-Line-14b-karcher-GGUF/resolve/main/Phi-4-reasoning-Line-14b-karcher.Q5_K_S.gguf) | Q5_K_S | 10.3 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-reasoning-Line-14b-karcher-GGUF/resolve/main/Phi-4-reasoning-Line-14b-karcher.Q5_K_M.gguf) | Q5_K_M | 10.7 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-reasoning-Line-14b-karcher-GGUF/resolve/main/Phi-4-reasoning-Line-14b-karcher.Q6_K.gguf) | Q6_K | 12.1 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-reasoning-Line-14b-karcher-GGUF/resolve/main/Phi-4-reasoning-Line-14b-karcher.Q8_0.gguf) | Q8_0 | 15.7 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Kenazin/Mistral-7B-peft-p-tuning-v2-8 | Kenazin | 2025-05-02T15:22:14Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-02T15:22:10Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
toilahonganh1712/Meta-Llama-3.1-8B-Travel-VungTau-LORA | toilahonganh1712 | 2025-05-02T15:22:11Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-02T15:22:04Z | ---
base_model: unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** toilahonganh1712
- **License:** apache-2.0
- **Finetuned from model:** unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
raulgdp/Mistral-7B-Instruct-v0.3-009 | raulgdp | 2025-05-02T15:19:50Z | 150 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-Instruct-v0.3",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.3",
"license:apache-2.0",
"region:us"
] | null | 2025-04-28T22:49:11Z | ---
library_name: peft
license: apache-2.0
base_model: mistralai/Mistral-7B-Instruct-v0.3
tags:
- generated_from_trainer
model-index:
- name: Mistral-7B-Instruct-v0.3-009
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Mistral-7B-Instruct-v0.3-009
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.3](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3631
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: paged_adamw_8bit with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.1021 | 0.8658 | 100 | 1.1161 |
| 0.8726 | 1.7273 | 200 | 0.8562 |
| 0.7038 | 2.5887 | 300 | 0.6993 |
| 0.5235 | 3.4502 | 400 | 0.5873 |
| 0.4779 | 4.3117 | 500 | 0.5180 |
| 0.3833 | 5.1732 | 600 | 0.4624 |
| 0.3858 | 6.0346 | 700 | 0.4272 |
| 0.3365 | 6.9004 | 800 | 0.4010 |
| 0.3222 | 7.7619 | 900 | 0.3826 |
| 0.3179 | 8.6234 | 1000 | 0.3714 |
| 0.2675 | 9.4848 | 1100 | 0.3631 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.6.0+cu126
- Datasets 3.5.0
- Tokenizers 0.21.1 |
hadidev/finetuning-sentiment-uselection2024 | hadidev | 2025-05-02T15:19:04Z | 1 | 0 | null | [
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"en",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"region:us"
] | text-classification | 2025-04-27T20:07:10Z | ---
license: apache-2.0
language:
- en
metrics:
- accuracy
base_model:
- distilbert/distilbert-base-uncased
pipeline_tag: text-classification
--- |
psyonp/Final-Llama-Token-Count-Response | psyonp | 2025-05-02T15:16:46Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-02T15:11:27Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
linoyts/hidream-yarn-art-lora-v2-trainer | linoyts | 2025-05-02T15:12:05Z | 17 | 0 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"hidream",
"hidream-diffusers",
"template:sd-lora",
"base_model:HiDream-ai/HiDream-I1-Full",
"base_model:adapter:HiDream-ai/HiDream-I1-Full",
"license:mit",
"region:us"
] | text-to-image | 2025-04-22T11:58:33Z | ---
base_model: HiDream-ai/HiDream-I1-Full
library_name: diffusers
license: mit
instance_prompt: a dog, yarn art style
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- hidream
- hidream-diffusers
- template:sd-lora
widget:
- text: yoda, yarn art style
output:
url: image_1.png
- text: cookie monster, yarn art style
output:
url: cookie.png
- text: the joker, yarn art style
output:
url: joker.png
- text: a capybara in a bubble batch, yarn art style
output:
url: capy.png
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# HiDream Image DreamBooth LoRA - linoyts/hidream-yarn-art-lora-v2-trainer
<Gallery />
## Model description
These are linoyts/hidream-yarn-art-lora-v2-trainer DreamBooth LoRA weights for HiDream-ai/HiDream-I1-Full.
The weights were trained using [DreamBooth](https://dreambooth.github.io/) with the [HiDream Image diffusers trainer](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/README_hidream.md).
## Trigger words
You should use `a dog, yarn art style` to trigger the image generation.
## Download model
[Download the *.safetensors LoRA](https://huggingface.co/linoyts/hidream-yarn-art-lora-v2-trainer/tree/main) in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
>>> import torch
>>> from transformers import PreTrainedTokenizerFast, LlamaForCausalLM
>>> from diffusers import HiDreamImagePipeline
>>> tokenizer_4 = PreTrainedTokenizerFast.from_pretrained("meta-llama/Meta-Llama-3.1-8B-Instruct")
>>> text_encoder_4 = LlamaForCausalLM.from_pretrained(
... "meta-llama/Meta-Llama-3.1-8B-Instruct",
... output_hidden_states=True,
... output_attentions=True,
... torch_dtype=torch.bfloat16,
... )
>>> pipe = HiDreamImagePipeline.from_pretrained(
... "HiDream-ai/HiDream-I1-Full",
... tokenizer_4=tokenizer_4,
... text_encoder_4=text_encoder_4,
... torch_dtype=torch.bfloat16,
... )
>>> pipe.enable_model_cpu_offload()
>>> pipe.load_lora_weights("linoyts/hidream-yarn-art-lora-v2-trainer")
>>> image = pipe("a dog, yarn art style").images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
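As a hedged example of the above, here is a sketch of fusing the LoRA at reduced strength — this assumes `HiDreamImagePipeline` exposes the generic diffusers `fuse_lora` API, which the LoRA-loading call above suggests:

```py
>>> # Assumption: the generic diffusers fuse_lora API applies to this pipeline.
>>> # Bake the LoRA into the base weights at 80% strength for faster inference.
>>> pipe.fuse_lora(lora_scale=0.8)
>>> image = pipe("a capybara, yarn art style").images[0]
```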
|
infogep/c363f9d9-5ce5-4139-90a0-72be44fcdbf6 | infogep | 2025-05-02T15:09:31Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0",
"base_model:adapter:WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0",
"license:llama3",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-02T14:41:58Z | ---
library_name: peft
license: llama3
base_model: WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0
tags:
- axolotl
- generated_from_trainer
model-index:
- name: c363f9d9-5ce5-4139-90a0-72be44fcdbf6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- ebdef80c11c8be43_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/ebdef80c11c8be43_train_data.json
type:
field_instruction: prompt
field_output: generation
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: infogep/c363f9d9-5ce5-4139-90a0-72be44fcdbf6
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/ebdef80c11c8be43_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 2048
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 8b4c8c80-b92d-409e-b63a-5d20d6027586
wandb_project: s56-30
wandb_run: your_name
wandb_runid: 8b4c8c80-b92d-409e-b63a-5d20d6027586
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# c363f9d9-5ce5-4139-90a0-72be44fcdbf6
This model is a fine-tuned version of [WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0](https://huggingface.co/WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0793
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_bnb_8bit with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.1118 | 0.1658 | 200 | 1.0793 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
HatimF/LoL_Build-llama-3B | HatimF | 2025-05-02T15:08:21Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llm",
"llama3",
"unsloth",
"quantization",
"lora",
"fine-tuning",
"League-of-Legends",
"text-generation",
"en",
"dataset:HatimF/LoL_Build_Dataset",
"base_model:unsloth/Llama-3.2-3B-bnb-4bit",
"base_model:adapter:unsloth/Llama-3.2-3B-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-27T16:26:59Z | ---
language: en
license: apache-2.0
tags:
- llm
- llama3
- unsloth
- quantization
- lora
- fine-tuning
- League-of-Legends
datasets:
- HatimF/LoL_Build_Dataset
library_name: transformers
pipeline_tag: text-generation
model_name: HatimF/LoL_Build-Llama3B
base_model: unsloth/Llama-3.2-3B-bnb-4bit
model_creator: HatimF
model_type: llama
quantized: true
bit_precision: 4
peft:
type: lora
rank: 16
alpha: 32
dropout: 0.05
training:
batch_size: 16
gradient_accumulation_steps: 1
epochs: 1
max_steps: 4772
learning_rate: 2e-4
weight_decay: 0.01
max_seq_length: 512
seed: 1515
optimizer: adamw_8bit
mixed_precision:
bf16: true
fp16_fallback: true
files:
- data/train.jsonl
- data/val.jsonl
repository_id: HatimF/LoL_Build-Llama3B
total_flos: 6.667359824212132e+17
logging_steps: 5
eval:
eval_loss: 0.14561191201210022
eval_runtime_sec: 1631.2678
eval_samples_per_second: 5.2
eval_steps_per_second: 0.65
---
# Model Card for HatimF/LoL_Build-Llama3B
This model generates League of Legends item build recommendations and gameplay advice for patch 15.7.1 (April 2025) from natural language prompts. It is fine-tuned on a custom JSONL dataset of prompt/completion pairs, using 4-bit quantization and LoRA to optimize inference speed and memory usage.
## Model Details
### Model Description
HatimF/LoL_Build-Llama3B is a causal language model based on Meta's Llama 3.2 (3B parameters), quantized to 4-bit and fine-tuned with Low-Rank Adaptation (LoRA). It specializes in producing strategic item builds and rationale for League of Legends champions under various game scenarios.
- **Model type:** Causal language model with 4-bit quantization & LoRA
- **Language(s):** English
- **Finetuned from:** unsloth/Llama-3.2-3B-bnb-4bit
## Uses
### Direct Use
Generate in-game build suggestions, draft analysis, or coaching tips for League of Legends. Input a prompt describing the champion, role, and enemy composition; output is a textual build recommendation with item order and reasoning.
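A minimal inference sketch — it assumes the repository loads as a standard 🤗 causal LM, and the prompt wording below is illustrative rather than the exact training template:

```python
# Minimal sketch, assuming the repo loads as a standard causal LM.
# The prompt wording is illustrative, not the exact training template.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "HatimF/LoL_Build-llama-3B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Champion: Jinx, Role: ADC, Enemy team: Malphite, Leona, Zed. Recommend an item build."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```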
### Out-of-Scope Use
Not suitable for real-time gameplay autopiloting, ranked match automation, or as a definitive source of strategy without human oversight.
## Training Details
### Training Procedure
- Loaded base model in 4-bit with unsloth.FastLanguageModel and bfloat16 where supported.
- Applied LoRA adapters (r=16, alpha=32, dropout=0.05).
- Tokenized concatenated prompt+completion pairs, truncating to 512 tokens.
- Trained for 1 epoch (4,772 steps), batch size 16, AdamW-8bit optimizer, bf16/fp16 mixed precision.
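A minimal sketch of the first two steps above, assuming the unsloth API (hyperparameter values are the ones listed in this card):

```python
# Sketch of the 4-bit load + LoRA setup described above (unsloth API assumed;
# values from this card: r=16, alpha=32, dropout=0.05, max_seq_length=512).
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Llama-3.2-3B-bnb-4bit",
    max_seq_length=512,
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
)
```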
## Evaluation
### Testing Data & Metrics
- Dataset: validation split of HatimF/LoL_Build_Dataset
- Metric: evaluation loss (eval_loss = 0.1456)
|
quickstep3621/dippy-v3-1-10 | quickstep3621 | 2025-05-02T12:27:59Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"gemma",
"google",
"Bifröst",
"Bifrost",
"code",
"text-generation",
"conversational",
"base_model:google/gemma-3-27b-it",
"base_model:finetune:google/gemma-3-27b-it",
"license:gemma",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-02T12:27:55Z | ---
license: gemma
library_name: transformers
pipeline_tag: text-generation
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: >-
To access Gemma on Hugging Face, you’re required to review and agree to
Google’s usage license. To do this, please ensure you’re logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
base_model: google/gemma-3-27b-it
tags:
- transformers
- gemma3
- gemma
- google
- Bifröst
- Bifrost
- code
---
## Bifröst-27B

Bifröst-27B is an advanced AI model built upon the Gemma 3 architecture, specifically fine-tuned for secure and efficient enterprise-grade code generation with reasoning. Designed to meet rigorous standards of safety, accuracy, and reliability, Bifröst empowers organizations to streamline software development workflows while prioritizing security and compliance.
### Model Details
- **Model Name:** Bifröst-27B
- **Base Architecture:** gemma3
- **Application:** Enterprise Secure Code Generation
- **Release Date:** 16-March-2025
### Intended Use
Bifröst is designed explicitly for:
- Generating secure, efficient, and high-quality code.
- Supporting development tasks within regulated enterprise environments.
- Enhancing productivity by automating routine coding tasks without compromising security.
### Features
- **Security-Focused Training:** Specialized training regimen emphasizing secure coding practices, vulnerability reduction, and adherence to security standards.
- **Enterprise-Optimized Performance:** Tailored to support various programming languages and enterprise frameworks with robust, context-aware suggestions.
- **Compliance-Driven Design:** Incorporates features to aid in maintaining compliance with industry-specific standards (e.g., GDPR, HIPAA, SOC 2).
### Limitations
- Bifröst should be used under human supervision to ensure code correctness and security compliance.
- Model-generated code should undergo appropriate security and quality assurance checks before deployment.
### Ethical Considerations
- Users are encouraged to perform regular audits and compliance checks on generated outputs.
- Enterprises should implement responsible AI practices to mitigate biases or unintended consequences.
### Usage
Below are some quick-start instructions for using the model with the `transformers` library.
#### Installation
```sh
$ pip install git+https://github.com/huggingface/[email protected]
```
#### Running with the `pipeline` API
```python
from transformers import pipeline
import torch
pipe = pipeline(
"text-generation",
model="OpenGenerativeAI/Bifrost-27B",
device="cuda",
torch_dtype=torch.bfloat16
)
messages = [{"role": "user", "content": "Generate a secure API key management system."}]
output = pipe(text=messages, max_new_tokens=200)
print(output[0]["generated_text"])
```
## Terms of Use
This model is released under the **Gemma license**. Users must comply with [Google's Gemma Terms of Use](https://ai.google.dev/gemma/terms), including restrictions on redistribution, modification, and commercial use. |
JavaEdge/qwen-rotten-tomatoes | JavaEdge | 2025-05-02T12:26:24Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-05-02T12:06:36Z | ---
library_name: transformers
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: qwen-rotten-tomatoes
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qwen-rotten-tomatoes
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.51.3
- Pytorch 2.7.0
- Datasets 3.5.1
- Tokenizers 0.21.1
|
ttdamowang/qwen3-1.7b_medical | ttdamowang | 2025-05-02T12:25:44Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"qwen3",
"text-generation",
"unsloth",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-02T11:38:36Z | ---
library_name: transformers
tags:
- unsloth
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Bts-Wiki-Paro-Aarti-Viral-Videos/wATCH.Paro.Aarti.viral.video.Link.Official.Tutorial | Bts-Wiki-Paro-Aarti-Viral-Videos | 2025-05-02T12:21:39Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-02T12:19:00Z | <animated-image data-catalyst=""><a href="https://tinyurl.com/yd5fmvay?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
Actor Paro Aarti's original video took the internet by storm and amazed viewers on various social media platforms. Actor Paro Aarti, a young and talented digital creator, recently became famous thanks to this interesting video.
Leaked Video Actor Paro Aarti Original Video Viral Video Leaked on X Twitter
Actor Paro Aarti Original Video official twitter
ykarout/phi4-deepseek-r1-0205-16bit-Q4_K_M-GGUF | ykarout | 2025-05-02T12:20:55Z | 0 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:ykarout/phi4-deepseek-r1-0205-16bit",
"base_model:quantized:ykarout/phi4-deepseek-r1-0205-16bit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-02T12:20:14Z | ---
base_model: ykarout/phi4-deepseek-r1-0205-16bit
tags:
- llama-cpp
- gguf-my-repo
---
# ykarout/phi4-deepseek-r1-0205-16bit-Q4_K_M-GGUF
This model was converted to GGUF format from [`ykarout/phi4-deepseek-r1-0205-16bit`](https://huggingface.co/ykarout/phi4-deepseek-r1-0205-16bit) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/ykarout/phi4-deepseek-r1-0205-16bit) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo ykarout/phi4-deepseek-r1-0205-16bit-Q4_K_M-GGUF --hf-file phi4-deepseek-r1-0205-16bit-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo ykarout/phi4-deepseek-r1-0205-16bit-Q4_K_M-GGUF --hf-file phi4-deepseek-r1-0205-16bit-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo ykarout/phi4-deepseek-r1-0205-16bit-Q4_K_M-GGUF --hf-file phi4-deepseek-r1-0205-16bit-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo ykarout/phi4-deepseek-r1-0205-16bit-Q4_K_M-GGUF --hf-file phi4-deepseek-r1-0205-16bit-q4_k_m.gguf -c 2048
```
|
vanwdai/byt5-base-finetuned-nlpaug-ocr | vanwdai | 2025-05-02T12:20:04Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-05-02T12:18:54Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
zelk12/MT2-gemma-3-12B-Q6_K-GGUF | zelk12 | 2025-05-02T12:16:23Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"image-text-to-text",
"base_model:zelk12/MT2-gemma-3-12B",
"base_model:quantized:zelk12/MT2-gemma-3-12B",
"license:gemma",
"endpoints_compatible",
"region:us",
"conversational"
] | image-text-to-text | 2025-05-02T12:15:40Z | ---
base_model: zelk12/MT2-gemma-3-12B
library_name: transformers
license: gemma
pipeline_tag: image-text-to-text
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# zelk12/MT2-gemma-3-12B-Q6_K-GGUF
This model was converted to GGUF format from [`zelk12/MT2-gemma-3-12B`](https://huggingface.co/zelk12/MT2-gemma-3-12B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/zelk12/MT2-gemma-3-12B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo zelk12/MT2-gemma-3-12B-Q6_K-GGUF --hf-file mt2-gemma-3-12b-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo zelk12/MT2-gemma-3-12B-Q6_K-GGUF --hf-file mt2-gemma-3-12b-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo zelk12/MT2-gemma-3-12B-Q6_K-GGUF --hf-file mt2-gemma-3-12b-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo zelk12/MT2-gemma-3-12B-Q6_K-GGUF --hf-file mt2-gemma-3-12b-q6_k.gguf -c 2048
```
|
cristiantica143/physics_adapted_llama_3.2_3b | cristiantica143 | 2025-05-02T12:15:20Z | 0 | 0 | transformers | [
"transformers",
"llama",
"feature-extraction",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2025-04-30T14:37:22Z | ---
base_model: unsloth/llama-3.2-3b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** cristiantica143
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-3b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Ba2han/Qwen-3-14B-Gemini-v0.1 | Ba2han | 2025-05-02T12:11:35Z | 0 | 1 | null | [
"safetensors",
"qwen3",
"en",
"base_model:Qwen/Qwen3-14B",
"base_model:finetune:Qwen/Qwen3-14B",
"license:mit",
"region:us"
] | null | 2025-05-01T23:32:51Z | ---
license: mit
language:
- en
base_model:
- Qwen/Qwen3-14B
---
> [!NOTE]
> **Use the "You are an assistant with reasoning capabilities." system message to trigger Gemini-style thinking.**
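For example, a minimal sketch of wiring that system message in with `transformers` (the user question and generation settings are illustrative):

```python
# Minimal sketch: the system message below is what triggers the Gemini-style
# reasoning. The user question and generation settings are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Ba2han/Qwen-3-14B-Gemini-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [
    {"role": "system", "content": "You are an assistant with reasoning capabilities."},
    {"role": "user", "content": "How many prime numbers are there below 50?"},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```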
# Training Dataset
- The fine-tuning dataset consists of ~300 diverse examples, 160 of which are directly from Gemini 2.5 Pro.
# Model
- Trained on the unsloth version of Qwen3-14B (instruct).
- No benchmark data for now.
**Keep in mind that it's slightly overfit, since the training dataset was quite small. The model can be used to create more high-quality examples for further training.**
 |
voxaiorg/drivethru-bk-adapter | voxaiorg | 2025-05-02T12:09:41Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-3.3-70B-Instruct",
"base_model:adapter:meta-llama/Llama-3.3-70B-Instruct",
"region:us"
] | null | 2025-05-02T12:08:38Z | ---
base_model: meta-llama/Llama-3.3-70B-Instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
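The card leaves this section blank. As a minimal sketch, assuming standard `peft` adapter loading on the base model named in the metadata:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Llama-3.3-70B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")

# Attach the LoRA adapter weights from this repository on top of the base model.
model = PeftModel.from_pretrained(base, "voxaiorg/drivethru-bk-adapter")
```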
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2 |
unrented5443/sn11-v4-2-8 | unrented5443 | 2025-05-02T12:08:22Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"gemma",
"google",
"Bifröst",
"Bifrost",
"code",
"text-generation",
"conversational",
"base_model:google/gemma-3-27b-it",
"base_model:finetune:google/gemma-3-27b-it",
"license:gemma",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-02T12:08:19Z | ---
license: gemma
library_name: transformers
pipeline_tag: text-generation
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: >-
To access Gemma on Hugging Face, you’re required to review and agree to
Google’s usage license. To do this, please ensure you’re logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
base_model: google/gemma-3-27b-it
tags:
- transformers
- gemma3
- gemma
- google
- Bifröst
- Bifrost
- code
---
## Bifröst-27B

Bifröst-27B is an advanced AI model built on the Gemma 3 architecture and fine-tuned specifically for secure, efficient, enterprise-grade code generation with reasoning. Designed to meet rigorous standards of safety, accuracy, and reliability, Bifröst empowers organizations to streamline software development workflows while prioritizing security and compliance.
### Model Details
- **Model Name:** Bifröst-27B
- **Base Architecture:** gemma3
- **Application:** Enterprise Secure Code Generation
- **Release Date:** 16-March-2025
### Intended Use
Bifröst is designed explicitly for:
- Generating secure, efficient, and high-quality code.
- Supporting development tasks within regulated enterprise environments.
- Enhancing productivity by automating routine coding tasks without compromising security.
### Features
- **Security-Focused Training:** Specialized training regimen emphasizing secure coding practices, vulnerability reduction, and adherence to security standards.
- **Enterprise-Optimized Performance:** Tailored to support various programming languages and enterprise frameworks with robust, context-aware suggestions.
- **Compliance-Driven Design:** Incorporates features to aid in maintaining compliance with industry-specific standards (e.g., GDPR, HIPAA, SOC 2).
### Limitations
- Bifröst should be used under human supervision to ensure code correctness and security compliance.
- Model-generated code should undergo appropriate security and quality assurance checks before deployment.
### Ethical Considerations
- Users are encouraged to perform regular audits and compliance checks on generated outputs.
- Enterprises should implement responsible AI practices to mitigate biases or unintended consequences.
### Usage
Below are some quick-start instructions for using the model with the `transformers` library.
#### Installation
```sh
$ pip install git+https://github.com/huggingface/[email protected]
```
#### Running with the `pipeline` API
```python
from transformers import pipeline
import torch
pipe = pipeline(
"text-generation",
model="OpenGenerativeAI/Bifrost-27B",
device="cuda",
torch_dtype=torch.bfloat16
)
messages = [{"role": "user", "content": "Generate a secure API key management system."}]
output = pipe(messages, max_new_tokens=200)  # chat messages are passed positionally
print(output[0]["generated_text"][-1]["content"])  # the last message holds the assistant's reply
```
## Terms of Use
This model is released under the **Gemma license**. Users must comply with [Google's Gemma Terms of Use](https://ai.google.dev/gemma/terms), including restrictions on redistribution, modification, and commercial use. |
faifaistone/hcw-ci | faifaistone | 2025-05-02T12:08:16Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-02T11:20:26Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: HCW
---
# Hcw Ci
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `HCW` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "HCW",
"lora_weights": "https://huggingface.co/faifaistone/hcw-ci/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('faifaistone/hcw-ci', weight_name='lora.safetensors')
image = pipeline('HCW').images[0]
image.save('hcw-sample.png')  # hypothetical output filename
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 4000
- Learning rate: 0.0004
- LoRA rank: 64
## Contribute your own examples
You can use the [community tab](https://huggingface.co/faifaistone/hcw-ci/discussions) to add images that show off what you’ve made with this LoRA.
|
nttaii/userReviewed-20250502160549-review_master_v2_label_smoothing_05_1e-6 | nttaii | 2025-05-02T12:06:14Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2025-05-02T09:05:57Z | ---
library_name: transformers
tags:
- generated_from_trainer
model-index:
- name: userReviewed-20250502160549-review_master_v2_label_smoothing_05_1e-6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# userReviewed-20250502160549-review_master_v2_label_smoothing_05_1e-6
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set (a sketch of the strict/ent-type/partial/exact matching schemes follows the list):
- Loss: 1.0754
- Overall Strict Precision: 0.7420
- Overall Strict Recall: 0.4672
- Overall Strict F1: 0.5734
- Overall Ent Type Precision: 0.8666
- Overall Ent Type Recall: 0.5457
- Overall Ent Type F1: 0.6697
- Overall Partial Precision: 0.8418
- Overall Partial Recall: 0.5301
- Overall Partial F1: 0.6505
- Overall Exact Precision: 0.7698
- Overall Exact Recall: 0.4848
- Overall Exact F1: 0.5949
- Roomno Strict Precision: 0.3428
- Roomno Strict Recall: 0.6648
- Roomno Strict F1: 0.4523
- Roomno Ent Type Precision: 0.3454
- Roomno Ent Type Recall: 0.6699
- Roomno Ent Type F1: 0.4558
- Roomno Partial Precision: 0.3538
- Roomno Partial Recall: 0.6862
- Roomno Partial F1: 0.4669
- Roomno Exact Precision: 0.3518
- Roomno Exact Recall: 0.6823
- Roomno Exact F1: 0.4643
- Hotelbankaccount Strict Precision: 0.0
- Hotelbankaccount Strict Recall: 0.0
- Hotelbankaccount Strict F1: 0
- Hotelbankaccount Ent Type Precision: 0.0
- Hotelbankaccount Ent Type Recall: 0.0
- Hotelbankaccount Ent Type F1: 0
- Hotelbankaccount Partial Precision: 0.0
- Hotelbankaccount Partial Recall: 0.0
- Hotelbankaccount Partial F1: 0
- Hotelbankaccount Exact Precision: 0.0
- Hotelbankaccount Exact Recall: 0.0
- Hotelbankaccount Exact F1: 0
- Checkin Strict Precision: 0.3447
- Checkin Strict Recall: 0.8594
- Checkin Strict F1: 0.4920
- Checkin Ent Type Precision: 0.3535
- Checkin Ent Type Recall: 0.8815
- Checkin Ent Type F1: 0.5047
- Checkin Partial Precision: 0.3537
- Checkin Partial Recall: 0.8819
- Checkin Partial F1: 0.5049
- Checkin Exact Precision: 0.3484
- Checkin Exact Recall: 0.8686
- Checkin Exact F1: 0.4973
- Travellername Strict Precision: 0.2130
- Travellername Strict Recall: 0.4185
- Travellername Strict F1: 0.2823
- Travellername Ent Type Precision: 0.3157
- Travellername Ent Type Recall: 0.6203
- Travellername Ent Type F1: 0.4185
- Travellername Partial Precision: 0.2672
- Travellername Partial Recall: 0.5249
- Travellername Partial F1: 0.3541
- Travellername Exact Precision: 0.2142
- Travellername Exact Recall: 0.4208
- Travellername Exact F1: 0.2839
- Companyzipcode Strict Precision: 0.3063
- Companyzipcode Strict Recall: 0.6593
- Companyzipcode Strict F1: 0.4183
- Companyzipcode Ent Type Precision: 0.3193
- Companyzipcode Ent Type Recall: 0.6872
- Companyzipcode Ent Type F1: 0.4361
- Companyzipcode Partial Precision: 0.3356
- Companyzipcode Partial Recall: 0.7222
- Companyzipcode Partial F1: 0.4583
- Companyzipcode Exact Precision: 0.3270
- Companyzipcode Exact Recall: 0.7037
- Companyzipcode Exact F1: 0.4465
- Hotelphonenumber Strict Precision: 0.2063
- Hotelphonenumber Strict Recall: 0.6482
- Hotelphonenumber Strict F1: 0.3130
- Hotelphonenumber Ent Type Precision: 0.2760
- Hotelphonenumber Ent Type Recall: 0.8670
- Hotelphonenumber Ent Type F1: 0.4187
- Hotelphonenumber Partial Precision: 0.2418
- Hotelphonenumber Partial Recall: 0.7598
- Hotelphonenumber Partial F1: 0.3669
- Hotelphonenumber Exact Precision: 0.2070
- Hotelphonenumber Exact Recall: 0.6502
- Hotelphonenumber Exact F1: 0.3140
- Hotelemail Strict Precision: 0.2448
- Hotelemail Strict Recall: 0.8390
- Hotelemail Strict F1: 0.3791
- Hotelemail Ent Type Precision: 0.2576
- Hotelemail Ent Type Recall: 0.8827
- Hotelemail Ent Type F1: 0.3988
- Hotelemail Partial Precision: 0.2513
- Hotelemail Partial Recall: 0.8612
- Hotelemail Partial F1: 0.3891
- Hotelemail Exact Precision: 0.2449
- Hotelemail Exact Recall: 0.8392
- Hotelemail Exact F1: 0.3792
- Taxamount Strict Precision: 0.1221
- Taxamount Strict Recall: 0.7256
- Taxamount Strict F1: 0.2090
- Taxamount Ent Type Precision: 0.1394
- Taxamount Ent Type Recall: 0.8289
- Taxamount Ent Type F1: 0.2387
- Taxamount Partial Precision: 0.1320
- Taxamount Partial Recall: 0.7849
- Taxamount Partial F1: 0.2261
- Taxamount Exact Precision: 0.1229
- Taxamount Exact Recall: 0.7306
- Taxamount Exact F1: 0.2104
- Hoteladdress Strict Precision: 0.1285
- Hoteladdress Strict Recall: 0.2384
- Hoteladdress Strict F1: 0.1670
- Hoteladdress Ent Type Precision: 0.2721
- Hoteladdress Ent Type Recall: 0.5048
- Hoteladdress Ent Type F1: 0.3536
- Hoteladdress Partial Precision: 0.2065
- Hoteladdress Partial Recall: 0.3832
- Hoteladdress Partial F1: 0.2684
- Hoteladdress Exact Precision: 0.1301
- Hoteladdress Exact Recall: 0.2413
- Hoteladdress Exact F1: 0.1690
- Cardnumber Strict Precision: 0.0145
- Cardnumber Strict Recall: 0.2431
- Cardnumber Strict F1: 0.0274
- Cardnumber Ent Type Precision: 0.0206
- Cardnumber Ent Type Recall: 0.3451
- Cardnumber Ent Type F1: 0.0389
- Cardnumber Partial Precision: 0.0178
- Cardnumber Partial Recall: 0.2978
- Cardnumber Partial F1: 0.0336
- Cardnumber Exact Precision: 0.0147
- Cardnumber Exact Recall: 0.2457
- Cardnumber Exact F1: 0.0277
- Documenttype Strict Precision: 0.2667
- Documenttype Strict Recall: 0.7044
- Documenttype Strict F1: 0.3869
- Documenttype Ent Type Precision: 0.3034
- Documenttype Ent Type Recall: 0.8015
- Documenttype Ent Type F1: 0.4402
- Documenttype Partial Precision: 0.2858
- Documenttype Partial Recall: 0.7548
- Documenttype Partial F1: 0.4146
- Documenttype Exact Precision: 0.2670
- Documenttype Exact Recall: 0.7052
- Documenttype Exact F1: 0.3873
- Externalreservationnumber Strict Precision: 0.0090
- Externalreservationnumber Strict Recall: 0.2725
- Externalreservationnumber Strict F1: 0.0174
- Externalreservationnumber Ent Type Precision: 0.0106
- Externalreservationnumber Ent Type Recall: 0.3217
- Externalreservationnumber Ent Type F1: 0.0206
- Externalreservationnumber Partial Precision: 0.0186
- Externalreservationnumber Partial Recall: 0.5638
- Externalreservationnumber Partial F1: 0.0360
- Externalreservationnumber Exact Precision: 0.0161
- Externalreservationnumber Exact Recall: 0.4889
- Externalreservationnumber Exact F1: 0.0312
- Netamount Strict Precision: 0.1364
- Netamount Strict Recall: 0.6612
- Netamount Strict F1: 0.2261
- Netamount Ent Type Precision: 0.1586
- Netamount Ent Type Recall: 0.7688
- Netamount Ent Type F1: 0.2629
- Netamount Partial Precision: 0.1519
- Netamount Partial Recall: 0.7364
- Netamount Partial F1: 0.2518
- Netamount Exact Precision: 0.1401
- Netamount Exact Recall: 0.6795
- Netamount Exact F1: 0.2324
- Grossamount Strict Precision: 0.2428
- Grossamount Strict Recall: 0.5930
- Grossamount Strict F1: 0.3445
- Grossamount Ent Type Precision: 0.2642
- Grossamount Ent Type Recall: 0.6454
- Grossamount Ent Type F1: 0.3749
- Grossamount Partial Precision: 0.2587
- Grossamount Partial Recall: 0.6318
- Grossamount Partial F1: 0.3671
- Grossamount Exact Precision: 0.2472
- Grossamount Exact Recall: 0.6038
- Grossamount Exact F1: 0.3508
- Invoicenumber Strict Precision: 0.3011
- Invoicenumber Strict Recall: 0.8259
- Invoicenumber Strict F1: 0.4414
- Invoicenumber Ent Type Precision: 0.3204
- Invoicenumber Ent Type Recall: 0.8788
- Invoicenumber Ent Type F1: 0.4696
- Invoicenumber Partial Precision: 0.3136
- Invoicenumber Partial Recall: 0.8600
- Invoicenumber Partial F1: 0.4596
- Invoicenumber Exact Precision: 0.3036
- Invoicenumber Exact Recall: 0.8325
- Invoicenumber Exact F1: 0.4449
- Checkout Strict Precision: 0.3432
- Checkout Strict Recall: 0.8614
- Checkout Strict F1: 0.4908
- Checkout Ent Type Precision: 0.3489
- Checkout Ent Type Recall: 0.8758
- Checkout Ent Type F1: 0.4990
- Checkout Partial Precision: 0.3488
- Checkout Partial Recall: 0.8756
- Checkout Partial F1: 0.4989
- Checkout Exact Precision: 0.3456
- Checkout Exact Recall: 0.8675
- Checkout Exact F1: 0.4943
- Bookingnumber Strict Precision: 0.0349
- Bookingnumber Strict Recall: 0.4464
- Bookingnumber Strict F1: 0.0647
- Bookingnumber Ent Type Precision: 0.0447
- Bookingnumber Ent Type Recall: 0.5718
- Bookingnumber Ent Type F1: 0.0829
- Bookingnumber Partial Precision: 0.0457
- Bookingnumber Partial Recall: 0.5852
- Bookingnumber Partial F1: 0.0848
- Bookingnumber Exact Precision: 0.0400
- Bookingnumber Exact Recall: 0.5117
- Bookingnumber Exact F1: 0.0742
- Hotelzipcode Strict Precision: 0.2598
- Hotelzipcode Strict Recall: 0.5574
- Hotelzipcode Strict F1: 0.3544
- Hotelzipcode Ent Type Precision: 0.2811
- Hotelzipcode Ent Type Recall: 0.6030
- Hotelzipcode Ent Type F1: 0.3834
- Hotelzipcode Partial Precision: 0.2733
- Hotelzipcode Partial Recall: 0.5863
- Hotelzipcode Partial F1: 0.3728
- Hotelzipcode Exact Precision: 0.2619
- Hotelzipcode Exact Recall: 0.5620
- Hotelzipcode Exact F1: 0.3573
- Hotelcountry Strict Precision: 0.0632
- Hotelcountry Strict Recall: 0.4168
- Hotelcountry Strict F1: 0.1098
- Hotelcountry Ent Type Precision: 0.0650
- Hotelcountry Ent Type Recall: 0.4290
- Hotelcountry Ent Type F1: 0.1130
- Hotelcountry Partial Precision: 0.0700
- Hotelcountry Partial Recall: 0.4617
- Hotelcountry Partial F1: 0.1216
- Hotelcountry Exact Precision: 0.0683
- Hotelcountry Exact Recall: 0.4501
- Hotelcountry Exact F1: 0.1185
- Companycountry Strict Precision: 0.2047
- Companycountry Strict Recall: 0.6578
- Companycountry Strict F1: 0.3122
- Companycountry Ent Type Precision: 0.2079
- Companycountry Ent Type Recall: 0.6681
- Companycountry Ent Type F1: 0.3172
- Companycountry Partial Precision: 0.2264
- Companycountry Partial Recall: 0.7273
- Companycountry Partial F1: 0.3453
- Companycountry Exact Precision: 0.2235
- Companycountry Exact Recall: 0.7180
- Companycountry Exact F1: 0.3409
- Currencycode Strict Precision: 0.2034
- Currencycode Strict Recall: 0.0843
- Currencycode Strict F1: 0.1192
- Currencycode Ent Type Precision: 0.2045
- Currencycode Ent Type Recall: 0.0848
- Currencycode Ent Type F1: 0.1199
- Currencycode Partial Precision: 0.2055
- Currencycode Partial Recall: 0.0852
- Currencycode Partial F1: 0.1205
- Currencycode Exact Precision: 0.2041
- Currencycode Exact Recall: 0.0846
- Currencycode Exact F1: 0.1196
- Invoicedate Strict Precision: 0.3131
- Invoicedate Strict Recall: 0.8613
- Invoicedate Strict F1: 0.4593
- Invoicedate Ent Type Precision: 0.3240
- Invoicedate Ent Type Recall: 0.8913
- Invoicedate Ent Type F1: 0.4753
- Invoicedate Partial Precision: 0.3217
- Invoicedate Partial Recall: 0.8848
- Invoicedate Partial F1: 0.4718
- Invoicedate Exact Precision: 0.3160
- Invoicedate Exact Recall: 0.8691
- Invoicedate Exact F1: 0.4635
- Hotelcity Strict Precision: 0.2398
- Hotelcity Strict Recall: 0.4699
- Hotelcity Strict F1: 0.3175
- Hotelcity Ent Type Precision: 0.2714
- Hotelcity Ent Type Recall: 0.5318
- Hotelcity Ent Type F1: 0.3594
- Hotelcity Partial Precision: 0.2618
- Hotelcity Partial Recall: 0.5130
- Hotelcity Partial F1: 0.3466
- Hotelcity Exact Precision: 0.2437
- Hotelcity Exact Recall: 0.4775
- Hotelcity Exact F1: 0.3227
- Companyaddress Strict Precision: 0.2317
- Companyaddress Strict Recall: 0.4040
- Companyaddress Strict F1: 0.2945
- Companyaddress Ent Type Precision: 0.3111
- Companyaddress Ent Type Recall: 0.5425
- Companyaddress Ent Type F1: 0.3954
- Companyaddress Partial Precision: 0.3012
- Companyaddress Partial Recall: 0.5253
- Companyaddress Partial F1: 0.3829
- Companyaddress Exact Precision: 0.2498
- Companyaddress Exact Recall: 0.4356
- Companyaddress Exact F1: 0.3175
- Hotelname Strict Precision: 0.1183
- Hotelname Strict Recall: 0.1515
- Hotelname Strict F1: 0.1328
- Hotelname Ent Type Precision: 0.2759
- Hotelname Ent Type Recall: 0.3533
- Hotelname Ent Type F1: 0.3099
- Hotelname Partial Precision: 0.2031
- Hotelname Partial Recall: 0.2600
- Hotelname Partial F1: 0.2281
- Hotelname Exact Precision: 0.1197
- Hotelname Exact Recall: 0.1532
- Hotelname Exact F1: 0.1344
- Hotelvatnumber Strict Precision: 0.1536
- Hotelvatnumber Strict Recall: 0.5915
- Hotelvatnumber Strict F1: 0.2438
- Hotelvatnumber Ent Type Precision: 0.1715
- Hotelvatnumber Ent Type Recall: 0.6605
- Hotelvatnumber Ent Type F1: 0.2723
- Hotelvatnumber Partial Precision: 0.1803
- Hotelvatnumber Partial Recall: 0.6944
- Hotelvatnumber Partial F1: 0.2863
- Hotelvatnumber Exact Precision: 0.1700
- Hotelvatnumber Exact Recall: 0.6547
- Hotelvatnumber Exact F1: 0.2699
- Companyname Strict Precision: 0.1909
- Companyname Strict Recall: 0.2664
- Companyname Strict F1: 0.2224
- Companyname Ent Type Precision: 0.2989
- Companyname Ent Type Recall: 0.4172
- Companyname Ent Type F1: 0.3483
- Companyname Partial Precision: 0.2780
- Companyname Partial Recall: 0.3879
- Companyname Partial F1: 0.3239
- Companyname Exact Precision: 0.2026
- Companyname Exact Recall: 0.2828
- Companyname Exact F1: 0.2361
- Companyvatnumber Strict Precision: 0.0881
- Companyvatnumber Strict Recall: 0.3209
- Companyvatnumber Strict F1: 0.1382
- Companyvatnumber Ent Type Precision: 0.0944
- Companyvatnumber Ent Type Recall: 0.3440
- Companyvatnumber Ent Type F1: 0.1482
- Companyvatnumber Partial Precision: 0.1513
- Companyvatnumber Partial Recall: 0.5511
- Companyvatnumber Partial F1: 0.2374
- Companyvatnumber Exact Precision: 0.1403
- Companyvatnumber Exact Recall: 0.5113
- Companyvatnumber Exact F1: 0.2202
- Cardtype Strict Precision: 0.0792
- Cardtype Strict Recall: 0.4133
- Cardtype Strict F1: 0.1330
- Cardtype Ent Type Precision: 0.0932
- Cardtype Ent Type Recall: 0.4865
- Cardtype Ent Type F1: 0.1565
- Cardtype Partial Precision: 0.0865
- Cardtype Partial Recall: 0.4513
- Cardtype Partial F1: 0.1452
- Cardtype Exact Precision: 0.0794
- Cardtype Exact Recall: 0.4141
- Cardtype Exact F1: 0.1332
- Hotelfaxnumber Strict Precision: 0.1641
- Hotelfaxnumber Strict Recall: 0.7207
- Hotelfaxnumber Strict F1: 0.2674
- Hotelfaxnumber Ent Type Precision: 0.2003
- Hotelfaxnumber Ent Type Recall: 0.8796
- Hotelfaxnumber Ent Type F1: 0.3263
- Hotelfaxnumber Partial Precision: 0.1830
- Hotelfaxnumber Partial Recall: 0.8035
- Hotelfaxnumber Partial F1: 0.2981
- Hotelfaxnumber Exact Precision: 0.1647
- Hotelfaxnumber Exact Recall: 0.7234
- Hotelfaxnumber Exact F1: 0.2684
- Companycity Strict Precision: 0.2766
- Companycity Strict Recall: 0.5484
- Companycity Strict F1: 0.3677
- Companycity Ent Type Precision: 0.3071
- Companycity Ent Type Recall: 0.6089
- Companycity Ent Type F1: 0.4083
- Companycity Partial Precision: 0.3212
- Companycity Partial Recall: 0.6368
- Companycity Partial F1: 0.4270
- Companycity Exact Precision: 0.2989
- Companycity Exact Recall: 0.5928
- Companycity Exact F1: 0.3975
- Pagenumber Strict Precision: 0.1668
- Pagenumber Strict Recall: 0.7204
- Pagenumber Strict F1: 0.2709
- Pagenumber Ent Type Precision: 0.1701
- Pagenumber Ent Type Recall: 0.7349
- Pagenumber Ent Type F1: 0.2763
- Pagenumber Partial Precision: 0.1686
- Pagenumber Partial Recall: 0.7280
- Pagenumber Partial F1: 0.2737
- Pagenumber Exact Precision: 0.1669
- Pagenumber Exact Recall: 0.7207
- Pagenumber Exact F1: 0.2710
- Hoteltaxcode Strict Precision: 0.0603
- Hoteltaxcode Strict Recall: 0.5545
- Hoteltaxcode Strict F1: 0.1088
- Hoteltaxcode Ent Type Precision: 0.0662
- Hoteltaxcode Ent Type Recall: 0.6080
- Hoteltaxcode Ent Type F1: 0.1193
- Hoteltaxcode Partial Precision: 0.0657
- Hoteltaxcode Partial Recall: 0.6038
- Hoteltaxcode Partial F1: 0.1185
- Hoteltaxcode Exact Precision: 0.0626
- Hoteltaxcode Exact Recall: 0.5757
- Hoteltaxcode Exact F1: 0.1130
- Reservationnumber Strict Precision: 0.0515
- Reservationnumber Strict Recall: 0.3535
- Reservationnumber Strict F1: 0.0899
- Reservationnumber Ent Type Precision: 0.0538
- Reservationnumber Ent Type Recall: 0.3691
- Reservationnumber Ent Type F1: 0.0939
- Reservationnumber Partial Precision: 0.0767
- Reservationnumber Partial Recall: 0.5263
- Reservationnumber Partial F1: 0.1339
- Reservationnumber Exact Precision: 0.0728
- Reservationnumber Exact Recall: 0.4994
- Reservationnumber Exact F1: 0.1270
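The strict / ent-type / partial / exact breakdown above matches the SemEval-2013 NER evaluation schemes, as implemented in libraries such as `nervaluate` (an assumption; the card does not say which tool produced these numbers). Below is a simplified, self-contained sketch of how the four schemes differ on a single span pair; the full schemes additionally award half credit to partial boundary matches when aggregating precision and recall:

```python
def overlaps(a, b):
    # Half-open character spans [start, end) overlap if neither ends before the other starts.
    return a[0] < b[1] and b[0] < a[1]

def match(gold, pred, scheme):
    (gs, ge, gt), (ps, pe, pt) = gold, pred
    if scheme == "strict":    # boundaries and entity type both exact
        return (gs, ge) == (ps, pe) and gt == pt
    if scheme == "exact":     # boundaries exact, type ignored
        return (gs, ge) == (ps, pe)
    if scheme == "partial":   # any boundary overlap, type ignored
        return overlaps((gs, ge), (ps, pe))
    if scheme == "ent_type":  # any boundary overlap, type must match
        return overlaps((gs, ge), (ps, pe)) and gt == pt
    raise ValueError(scheme)

gold = (10, 22, "Hotelname")
pred = (10, 17, "Hotelname")  # correct type, truncated span
for scheme in ("strict", "exact", "partial", "ent_type"):
    print(scheme, match(gold, pred, scheme))
# -> strict False, exact False, partial True, ent_type True
```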
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 1e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.5
- num_epochs: 8
- label_smoothing_factor: 0.05
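A minimal sketch of the equivalent Hugging Face `TrainingArguments` (the `output_dir` is a placeholder; the Adam betas and epsilon listed above are the library defaults):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="userReviewed-review_master_v2",  # placeholder name
    learning_rate=1e-6,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    gradient_accumulation_steps=32,  # 16 * 32 = 512 effective batch size
    lr_scheduler_type="linear",
    warmup_ratio=0.5,
    num_train_epochs=8,
    label_smoothing_factor=0.05,
)
```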
### Training results
| Training Loss | Epoch | Step | Validation Loss | Overall Strict Precision | Overall Strict Recall | Overall Strict F1 | Overall Ent Type Precision | Overall Ent Type Recall | Overall Ent Type F1 | Overall Partial Precision | Overall Partial Recall | Overall Partial F1 | Overall Exact Precision | Overall Exact Recall | Overall Exact F1 |
|:-------------:|:------:|:----:|:---------------:|:------------------------:|:---------------------:|:-----------------:|:--------------------------:|:-----------------------:|:-------------------:|:-------------------------:|:----------------------:|:------------------:|:-----------------------:|:--------------------:|:----------------:|
| No log | 0.9976 | 78 | 0.9056 | 0.7721 | 0.5682 | 0.6546 | 0.8710 | 0.6411 | 0.7386 | 0.8369 | 0.6160 | 0.7096 | 0.7842 | 0.5772 | 0.6649 |
| No log | 1.9952 | 156 | 0.9645 | 0.7812 | 0.5378 | 0.6370 | 0.8852 | 0.6095 | 0.7219 | 0.8524 | 0.5869 | 0.6951 | 0.7962 | 0.5481 | 0.6493 |
| No log | 2.9928 | 234 | 1.0155 | 0.7674 | 0.5123 | 0.6144 | 0.8789 | 0.5868 | 0.7037 | 0.8480 | 0.5661 | 0.6789 | 0.7862 | 0.5249 | 0.6295 |

*Per-entity strict/ent-type/partial/exact precision, recall, and F1 were also tracked at each epoch; the values for the final model are listed in full in the results section above.*
0.0655 | 0.5579 | 0.1173 | 0.0701 | 0.5967 | 0.1254 | 0.0707 | 0.6019 | 0.1265 | 0.0681 | 0.5800 | 0.1219 | 0.0490 | 0.4019 | 0.0874 | 0.0513 | 0.4203 | 0.0914 | 0.0710 | 0.5819 | 0.1265 | 0.0678 | 0.5558 | 0.1209 |
| No log | 3.9904 | 312 | 1.0520 | 0.7556 | 0.4884 | 0.5933 | 0.8736 | 0.5647 | 0.6860 | 0.8460 | 0.5468 | 0.6643 | 0.7794 | 0.5038 | 0.6120 | 0.3412 | 0.6845 | 0.4554 | 0.3433 | 0.6885 | 0.4581 | 0.3517 | 0.7055 | 0.4694 | 0.3500 | 0.7021 | 0.4672 | 0.0 | 0.0 | 0 | 0.0 | 0.0 | 0 | 0.0 | 0.0 | 0 | 0.0 | 0.0 | 0 | 0.3411 | 0.8679 | 0.4897 | 0.3492 | 0.8885 | 0.5014 | 0.3498 | 0.8901 | 0.5023 | 0.3449 | 0.8774 | 0.4951 | 0.2154 | 0.4144 | 0.2835 | 0.3204 | 0.6164 | 0.4217 | 0.2705 | 0.5203 | 0.3559 | 0.2166 | 0.4166 | 0.2850 | 0.3067 | 0.7084 | 0.4281 | 0.3187 | 0.7360 | 0.4448 | 0.3310 | 0.7645 | 0.4620 | 0.3233 | 0.7467 | 0.4513 | 0.2068 | 0.6628 | 0.3153 | 0.2732 | 0.8755 | 0.4164 | 0.2407 | 0.7714 | 0.3669 | 0.2074 | 0.6647 | 0.3162 | 0.2430 | 0.8446 | 0.3774 | 0.2551 | 0.8866 | 0.3962 | 0.2491 | 0.8659 | 0.3869 | 0.2431 | 0.8449 | 0.3775 | 0.1218 | 0.7461 | 0.2094 | 0.1377 | 0.8437 | 0.2368 | 0.1308 | 0.8012 | 0.2248 | 0.1224 | 0.75 | 0.2105 | 0.1354 | 0.2571 | 0.1774 | 0.2732 | 0.5186 | 0.3578 | 0.2101 | 0.3988 | 0.2752 | 0.1366 | 0.2593 | 0.1789 | 0.0142 | 0.2451 | 0.0269 | 0.0203 | 0.3485 | 0.0383 | 0.0174 | 0.3001 | 0.0330 | 0.0144 | 0.2473 | 0.0272 | 0.2671 | 0.7241 | 0.3902 | 0.3009 | 0.8157 | 0.4396 | 0.2846 | 0.7716 | 0.4158 | 0.2674 | 0.7249 | 0.3907 | 0.0089 | 0.2903 | 0.0173 | 0.0105 | 0.3415 | 0.0203 | 0.0173 | 0.5646 | 0.0336 | 0.0154 | 0.5026 | 0.0299 | 0.1374 | 0.6921 | 0.2293 | 0.1570 | 0.7906 | 0.2620 | 0.1509 | 0.7597 | 0.2517 | 0.1405 | 0.7073 | 0.2344 | 0.2410 | 0.5974 | 0.3434 | 0.2619 | 0.6492 | 0.3732 | 0.2560 | 0.6346 | 0.3648 | 0.2448 | 0.6070 | 0.3489 | 0.2996 | 0.8407 | 0.4417 | 0.3180 | 0.8924 | 0.4689 | 0.3109 | 0.8724 | 0.4584 | 0.3015 | 0.8462 | 0.4446 | 0.3400 | 0.8645 | 0.4880 | 0.3449 | 0.8772 | 0.4952 | 0.3451 | 0.8776 | 0.4954 | 0.3423 | 0.8705 | 0.4914 | 0.0406 | 0.4697 | 0.0748 | 0.0496 | 0.5733 | 0.0913 | 0.0525 | 0.6066 | 0.0966 | 0.0475 | 0.5488 | 0.0874 | 0.2610 | 0.5803 | 0.3600 | 0.2808 | 0.6244 | 0.3874 | 0.2737 | 0.6085 | 0.3775 | 0.2632 | 0.5853 | 0.3632 | 0.0670 | 0.4389 | 0.1163 | 0.0678 | 0.4437 | 0.1176 | 0.0745 | 0.4880 | 0.1293 | 0.0732 | 0.4795 | 0.1270 | 0.2068 | 0.7020 | 0.3195 | 0.2090 | 0.7092 | 0.3228 | 0.2250 | 0.7636 | 0.3475 | 0.2227 | 0.7559 | 0.3440 | 0.2027 | 0.0838 | 0.1186 | 0.2039 | 0.0843 | 0.1193 | 0.2049 | 0.0847 | 0.1199 | 0.2035 | 0.0841 | 0.1190 | 0.3106 | 0.8701 | 0.4578 | 0.3209 | 0.8989 | 0.4729 | 0.3187 | 0.8929 | 0.4698 | 0.3134 | 0.8780 | 0.4619 | 0.2426 | 0.5004 | 0.3268 | 0.2712 | 0.5593 | 0.3652 | 0.2632 | 0.5429 | 0.3545 | 0.2468 | 0.5090 | 0.3324 | 0.2337 | 0.4529 | 0.3083 | 0.3115 | 0.6037 | 0.4109 | 0.2951 | 0.5719 | 0.3893 | 0.2480 | 0.4807 | 0.3272 | 0.1306 | 0.1800 | 0.1514 | 0.2711 | 0.3736 | 0.3142 | 0.2068 | 0.2851 | 0.2397 | 0.1318 | 0.1817 | 0.1528 | 0.1706 | 0.6122 | 0.2669 | 0.1869 | 0.6705 | 0.2923 | 0.1970 | 0.7069 | 0.3082 | 0.1878 | 0.6737 | 0.2937 | 0.2028 | 0.3185 | 0.2478 | 0.3002 | 0.4716 | 0.3669 | 0.2760 | 0.4335 | 0.3373 | 0.2118 | 0.3326 | 0.2588 | 0.0893 | 0.3936 | 0.1456 | 0.0951 | 0.4193 | 0.1551 | 0.1303 | 0.5742 | 0.2124 | 0.1226 | 0.5403 | 0.1999 | 0.0777 | 0.4137 | 0.1308 | 0.0917 | 0.4885 | 0.1545 | 0.0850 | 0.4528 | 0.1432 | 0.0779 | 0.4146 | 0.1311 | 0.1629 | 0.7269 | 0.2661 | 0.1979 | 0.8834 | 0.3234 | 0.1809 | 0.8073 | 0.2955 | 0.1632 | 0.7284 | 0.2666 | 0.2794 | 0.6119 | 0.3836 | 0.3064 | 0.6710 | 0.4207 | 0.3162 | 0.6925 | 0.4341 | 0.2977 | 0.6519 | 0.4087 | 0.1657 | 0.7269 | 0.2699 | 0.1689 | 0.7410 | 0.2751 | 0.1674 | 0.7344 | 0.2727 | 0.1658 | 0.7273 | 0.2701 | 
0.0640 | 0.5702 | 0.1151 | 0.0694 | 0.6185 | 0.1248 | 0.0693 | 0.6179 | 0.1247 | 0.0664 | 0.5917 | 0.1194 | 0.0503 | 0.3773 | 0.0888 | 0.0528 | 0.3960 | 0.0932 | 0.0750 | 0.5621 | 0.1323 | 0.0716 | 0.5366 | 0.1263 |
| No log | 4.9880 | 390 | 1.0641 | 0.7486 | 0.4779 | 0.5834 | 0.8701 | 0.5555 | 0.6781 | 0.8438 | 0.5387 | 0.6576 | 0.7745 | 0.4944 | 0.6035 | 0.3421 | 0.6722 | 0.4535 | 0.3445 | 0.6767 | 0.4566 | 0.3530 | 0.6936 | 0.4679 | 0.3512 | 0.6899 | 0.4654 | 0.0 | 0.0 | 0 | 0.0 | 0.0 | 0 | 0.0 | 0.0 | 0 | 0.0 | 0.0 | 0 | 0.3434 | 0.8643 | 0.4916 | 0.3517 | 0.8850 | 0.5034 | 0.3522 | 0.8863 | 0.5041 | 0.3472 | 0.8736 | 0.4969 | 0.2148 | 0.4175 | 0.2836 | 0.3184 | 0.6188 | 0.4205 | 0.2693 | 0.5233 | 0.3556 | 0.2158 | 0.4195 | 0.2850 | 0.3065 | 0.6844 | 0.4234 | 0.3193 | 0.7131 | 0.4411 | 0.3335 | 0.7448 | 0.4607 | 0.3252 | 0.7263 | 0.4493 | 0.2069 | 0.6567 | 0.3147 | 0.2746 | 0.8716 | 0.4177 | 0.2414 | 0.7663 | 0.3672 | 0.2075 | 0.6586 | 0.3156 | 0.2443 | 0.8432 | 0.3788 | 0.2566 | 0.8859 | 0.3980 | 0.2505 | 0.8648 | 0.3885 | 0.2444 | 0.8435 | 0.3789 | 0.1222 | 0.7371 | 0.2097 | 0.1386 | 0.8356 | 0.2377 | 0.1315 | 0.7931 | 0.2256 | 0.1229 | 0.7413 | 0.2109 | 0.1315 | 0.2472 | 0.1717 | 0.2726 | 0.5123 | 0.3558 | 0.2081 | 0.3912 | 0.2717 | 0.1332 | 0.2503 | 0.1739 | 0.0144 | 0.2438 | 0.0272 | 0.0205 | 0.3464 | 0.0386 | 0.0176 | 0.2983 | 0.0333 | 0.0145 | 0.2460 | 0.0274 | 0.2665 | 0.7118 | 0.3878 | 0.3024 | 0.8077 | 0.4400 | 0.2851 | 0.7616 | 0.4149 | 0.2667 | 0.7125 | 0.3882 | 0.0087 | 0.2724 | 0.0168 | 0.0107 | 0.3340 | 0.0207 | 0.0178 | 0.5577 | 0.0345 | 0.0154 | 0.4831 | 0.0299 | 0.1366 | 0.6782 | 0.2274 | 0.1578 | 0.7837 | 0.2627 | 0.1511 | 0.7504 | 0.2516 | 0.1399 | 0.6948 | 0.2329 | 0.2422 | 0.5961 | 0.3445 | 0.2633 | 0.6479 | 0.3744 | 0.2577 | 0.6341 | 0.3664 | 0.2465 | 0.6065 | 0.3505 | 0.3007 | 0.8336 | 0.4420 | 0.3195 | 0.8856 | 0.4696 | 0.3125 | 0.8662 | 0.4593 | 0.3027 | 0.8392 | 0.4450 | 0.3419 | 0.8636 | 0.4898 | 0.3473 | 0.8772 | 0.4976 | 0.3473 | 0.8773 | 0.4976 | 0.3443 | 0.8696 | 0.4933 | 0.0375 | 0.4518 | 0.0692 | 0.0464 | 0.5593 | 0.0857 | 0.0485 | 0.5846 | 0.0895 | 0.0434 | 0.5227 | 0.0801 | 0.2610 | 0.5704 | 0.3581 | 0.2817 | 0.6156 | 0.3865 | 0.2741 | 0.5990 | 0.3761 | 0.2632 | 0.5752 | 0.3611 | 0.0647 | 0.4276 | 0.1124 | 0.0661 | 0.4373 | 0.1149 | 0.0723 | 0.4778 | 0.1256 | 0.0707 | 0.4672 | 0.1228 | 0.2062 | 0.6773 | 0.3161 | 0.2089 | 0.6863 | 0.3203 | 0.2263 | 0.7435 | 0.3470 | 0.2236 | 0.7345 | 0.3428 | 0.2033 | 0.0842 | 0.1191 | 0.2045 | 0.0847 | 0.1198 | 0.2055 | 0.0851 | 0.1203 | 0.2040 | 0.0845 | 0.1195 | 0.3121 | 0.8663 | 0.4589 | 0.3226 | 0.8954 | 0.4744 | 0.3204 | 0.8893 | 0.4711 | 0.3149 | 0.8740 | 0.4630 | 0.2402 | 0.4855 | 0.3214 | 0.2712 | 0.5481 | 0.3629 | 0.2620 | 0.5296 | 0.3506 | 0.2443 | 0.4937 | 0.3268 | 0.2326 | 0.4292 | 0.3017 | 0.3116 | 0.5750 | 0.4042 | 0.2982 | 0.5501 | 0.3867 | 0.2489 | 0.4592 | 0.3228 | 0.1250 | 0.1657 | 0.1425 | 0.2740 | 0.3634 | 0.3124 | 0.2056 | 0.2727 | 0.2345 | 0.1263 | 0.1675 | 0.1440 | 0.1617 | 0.6022 | 0.2549 | 0.1792 | 0.6674 | 0.2825 | 0.1884 | 0.7017 | 0.2971 | 0.1786 | 0.6651 | 0.2815 | 0.1969 | 0.2903 | 0.2346 | 0.2999 | 0.4421 | 0.3574 | 0.2769 | 0.4082 | 0.3300 | 0.2070 | 0.3051 | 0.2467 | 0.0893 | 0.3529 | 0.1425 | 0.0950 | 0.3757 | 0.1517 | 0.1419 | 0.5610 | 0.2265 | 0.1325 | 0.5239 | 0.2115 | 0.0784 | 0.4117 | 0.1318 | 0.0926 | 0.4863 | 0.1556 | 0.0859 | 0.4507 | 0.1442 | 0.0786 | 0.4126 | 0.1320 | 0.1636 | 0.7241 | 0.2669 | 0.1992 | 0.8819 | 0.3250 | 0.1820 | 0.8058 | 0.2970 | 0.1641 | 0.7264 | 0.2677 | 0.2781 | 0.5800 | 0.3760 | 0.3068 | 0.6398 | 0.4147 | 0.3187 | 0.6646 | 0.4308 | 0.2983 | 0.6221 | 0.4033 | 0.1664 | 0.7238 | 0.2706 | 0.1697 | 0.7382 | 0.2760 | 0.1682 | 0.7314 | 0.2735 | 0.1665 | 0.7241 | 0.2708 | 
0.0619 | 0.5614 | 0.1115 | 0.0679 | 0.6157 | 0.1223 | 0.0674 | 0.6115 | 0.1214 | 0.0642 | 0.5827 | 0.1157 | 0.0509 | 0.3612 | 0.0892 | 0.0533 | 0.3784 | 0.0934 | 0.0758 | 0.5381 | 0.1328 | 0.0722 | 0.5128 | 0.1266 |
| No log | 5.9984 | 469 | 1.0707 | 0.7449 | 0.4717 | 0.5776 | 0.8682 | 0.5498 | 0.6733 | 0.8428 | 0.5336 | 0.6535 | 0.7719 | 0.4888 | 0.5985 | 0.3427 | 0.6668 | 0.4527 | 0.3453 | 0.6719 | 0.4561 | 0.3537 | 0.6883 | 0.4673 | 0.3518 | 0.6845 | 0.4647 | 0.0 | 0.0 | 0 | 0.0 | 0.0 | 0 | 0.0 | 0.0 | 0 | 0.0 | 0.0 | 0 | 0.3444 | 0.8622 | 0.4922 | 0.3530 | 0.8839 | 0.5045 | 0.3533 | 0.8847 | 0.5050 | 0.3481 | 0.8716 | 0.4975 | 0.2141 | 0.4190 | 0.2834 | 0.3174 | 0.6210 | 0.4201 | 0.2683 | 0.5250 | 0.3551 | 0.2150 | 0.4207 | 0.2846 | 0.3067 | 0.6693 | 0.4207 | 0.3196 | 0.6974 | 0.4383 | 0.3350 | 0.7310 | 0.4595 | 0.3264 | 0.7124 | 0.4477 | 0.2068 | 0.6519 | 0.3140 | 0.2757 | 0.8690 | 0.4186 | 0.2419 | 0.7626 | 0.3673 | 0.2074 | 0.6538 | 0.3149 | 0.2449 | 0.8404 | 0.3792 | 0.2575 | 0.8837 | 0.3988 | 0.2513 | 0.8623 | 0.3892 | 0.2449 | 0.8406 | 0.3794 | 0.1223 | 0.7318 | 0.2096 | 0.1393 | 0.8331 | 0.2386 | 0.1320 | 0.7897 | 0.2262 | 0.1231 | 0.7365 | 0.2109 | 0.1297 | 0.2407 | 0.1685 | 0.2725 | 0.5058 | 0.3541 | 0.2073 | 0.3849 | 0.2695 | 0.1312 | 0.2436 | 0.1706 | 0.0145 | 0.2421 | 0.0273 | 0.0205 | 0.3435 | 0.0387 | 0.0177 | 0.2963 | 0.0334 | 0.0146 | 0.2443 | 0.0275 | 0.2669 | 0.7065 | 0.3874 | 0.3033 | 0.8029 | 0.4403 | 0.2858 | 0.7566 | 0.4149 | 0.2672 | 0.7073 | 0.3879 | 0.0088 | 0.2692 | 0.0171 | 0.0106 | 0.3226 | 0.0205 | 0.0182 | 0.5544 | 0.0352 | 0.0157 | 0.4791 | 0.0304 | 0.1367 | 0.6691 | 0.2270 | 0.1584 | 0.7752 | 0.2630 | 0.1517 | 0.7425 | 0.2519 | 0.1402 | 0.6864 | 0.2329 | 0.2428 | 0.5944 | 0.3447 | 0.2642 | 0.6469 | 0.3751 | 0.2585 | 0.6331 | 0.3672 | 0.2471 | 0.6050 | 0.3509 | 0.3011 | 0.8286 | 0.4417 | 0.3203 | 0.8814 | 0.4699 | 0.3134 | 0.8624 | 0.4597 | 0.3034 | 0.8350 | 0.4451 | 0.3431 | 0.8627 | 0.4909 | 0.3485 | 0.8764 | 0.4987 | 0.3486 | 0.8765 | 0.4988 | 0.3455 | 0.8688 | 0.4944 | 0.0365 | 0.4569 | 0.0675 | 0.0457 | 0.5724 | 0.0846 | 0.0473 | 0.5923 | 0.0876 | 0.0420 | 0.5257 | 0.0777 | 0.2607 | 0.5618 | 0.3561 | 0.2813 | 0.6063 | 0.3843 | 0.2739 | 0.5902 | 0.3741 | 0.2629 | 0.5665 | 0.3591 | 0.0638 | 0.4253 | 0.1110 | 0.0655 | 0.4362 | 0.1138 | 0.0707 | 0.4710 | 0.1230 | 0.0691 | 0.4605 | 0.1202 | 0.2057 | 0.6671 | 0.3145 | 0.2086 | 0.6766 | 0.3189 | 0.2269 | 0.7359 | 0.3469 | 0.2241 | 0.7268 | 0.3426 | 0.2038 | 0.0844 | 0.1193 | 0.2049 | 0.0848 | 0.1200 | 0.2059 | 0.0852 | 0.1206 | 0.2045 | 0.0847 | 0.1197 | 0.3129 | 0.8621 | 0.4591 | 0.3237 | 0.8918 | 0.4750 | 0.3214 | 0.8855 | 0.4716 | 0.3157 | 0.8699 | 0.4633 | 0.2404 | 0.4767 | 0.3196 | 0.2715 | 0.5383 | 0.3609 | 0.2622 | 0.5200 | 0.3486 | 0.2444 | 0.4846 | 0.3249 | 0.2321 | 0.4144 | 0.2976 | 0.3116 | 0.5562 | 0.3994 | 0.3001 | 0.5357 | 0.3847 | 0.2495 | 0.4455 | 0.3199 | 0.1211 | 0.1568 | 0.1367 | 0.2754 | 0.3567 | 0.3108 | 0.2045 | 0.2648 | 0.2308 | 0.1225 | 0.1586 | 0.1382 | 0.1574 | 0.5969 | 0.2492 | 0.1758 | 0.6664 | 0.2782 | 0.1845 | 0.6996 | 0.2920 | 0.1740 | 0.6597 | 0.2754 | 0.1935 | 0.2756 | 0.2273 | 0.2997 | 0.4268 | 0.3521 | 0.2775 | 0.3952 | 0.3261 | 0.2046 | 0.2915 | 0.2405 | 0.0888 | 0.3351 | 0.1404 | 0.0948 | 0.3580 | 0.1500 | 0.1467 | 0.5538 | 0.2320 | 0.1365 | 0.5152 | 0.2158 | 0.0790 | 0.4120 | 0.1326 | 0.0933 | 0.4865 | 0.1566 | 0.0864 | 0.4507 | 0.1450 | 0.0791 | 0.4127 | 0.1328 | 0.1640 | 0.7224 | 0.2673 | 0.2001 | 0.8815 | 0.3262 | 0.1827 | 0.8049 | 0.2978 | 0.1645 | 0.7248 | 0.2682 | 0.2777 | 0.5631 | 0.3720 | 0.3071 | 0.6228 | 0.4114 | 0.3208 | 0.6504 | 0.4296 | 0.2993 | 0.6069 | 0.4009 | 0.1667 | 0.7217 | 0.2709 | 0.1701 | 0.7364 | 0.2764 | 0.1685 | 0.7295 | 0.2738 | 0.1668 | 0.7221 | 0.2710 | 
0.0612 | 0.5596 | 0.1104 | 0.0669 | 0.6117 | 0.1206 | 0.0665 | 0.6082 | 0.1199 | 0.0635 | 0.5807 | 0.1145 | 0.0512 | 0.3586 | 0.0897 | 0.0538 | 0.3764 | 0.0941 | 0.0765 | 0.5357 | 0.1339 | 0.0729 | 0.5103 | 0.1276 |
| 1.1957 | 6.9960 | 547 | 1.0735 | 0.7429 | 0.4687 | 0.5747 | 0.8672 | 0.5471 | 0.6709 | 0.8419 | 0.5311 | 0.6514 | 0.7702 | 0.4859 | 0.5958 | 0.3426 | 0.6660 | 0.4525 | 0.3453 | 0.6712 | 0.4560 | 0.3537 | 0.6875 | 0.4671 | 0.3517 | 0.6835 | 0.4644 | 0.0 | 0.0 | 0 | 0.0 | 0.0 | 0 | 0.0 | 0.0 | 0 | 0.0 | 0.0 | 0 | 0.3445 | 0.8606 | 0.4921 | 0.3534 | 0.8827 | 0.5047 | 0.3535 | 0.8831 | 0.5049 | 0.3482 | 0.8697 | 0.4973 | 0.2132 | 0.4186 | 0.2825 | 0.3160 | 0.6204 | 0.4187 | 0.2672 | 0.5245 | 0.3540 | 0.2140 | 0.4201 | 0.2835 | 0.3065 | 0.6639 | 0.4194 | 0.3195 | 0.6922 | 0.4372 | 0.3350 | 0.7257 | 0.4584 | 0.3264 | 0.7070 | 0.4466 | 0.2066 | 0.6497 | 0.3135 | 0.2759 | 0.8677 | 0.4186 | 0.2419 | 0.7609 | 0.3671 | 0.2072 | 0.6517 | 0.3144 | 0.2448 | 0.8393 | 0.3791 | 0.2575 | 0.8828 | 0.3988 | 0.2513 | 0.8613 | 0.3890 | 0.2449 | 0.8395 | 0.3792 | 0.1220 | 0.7267 | 0.2089 | 0.1394 | 0.8304 | 0.2388 | 0.1320 | 0.7861 | 0.2260 | 0.1228 | 0.7315 | 0.2103 | 0.1289 | 0.2390 | 0.1675 | 0.2726 | 0.5053 | 0.3542 | 0.2071 | 0.3838 | 0.2690 | 0.1305 | 0.2419 | 0.1695 | 0.0146 | 0.2445 | 0.0275 | 0.0205 | 0.3446 | 0.0388 | 0.0178 | 0.2983 | 0.0335 | 0.0147 | 0.2472 | 0.0278 | 0.2668 | 0.7051 | 0.3871 | 0.3034 | 0.8019 | 0.4402 | 0.2858 | 0.7553 | 0.4146 | 0.2670 | 0.7059 | 0.3875 | 0.0089 | 0.2705 | 0.0173 | 0.0106 | 0.3208 | 0.0205 | 0.0184 | 0.5575 | 0.0356 | 0.0159 | 0.4821 | 0.0308 | 0.1366 | 0.6629 | 0.2265 | 0.1584 | 0.7688 | 0.2627 | 0.1519 | 0.7372 | 0.2519 | 0.1403 | 0.6810 | 0.2327 | 0.2427 | 0.5941 | 0.3446 | 0.2642 | 0.6466 | 0.3751 | 0.2585 | 0.6327 | 0.3670 | 0.2470 | 0.6045 | 0.3507 | 0.3012 | 0.8273 | 0.4416 | 0.3203 | 0.8798 | 0.4696 | 0.3136 | 0.8613 | 0.4598 | 0.3036 | 0.8340 | 0.4452 | 0.3430 | 0.8612 | 0.4906 | 0.3487 | 0.8755 | 0.4988 | 0.3486 | 0.8753 | 0.4987 | 0.3455 | 0.8673 | 0.4941 | 0.0352 | 0.4455 | 0.0652 | 0.0453 | 0.5737 | 0.0840 | 0.0464 | 0.5873 | 0.0860 | 0.0405 | 0.5131 | 0.0751 | 0.2602 | 0.5577 | 0.3548 | 0.2814 | 0.6032 | 0.3837 | 0.2736 | 0.5866 | 0.3732 | 0.2624 | 0.5624 | 0.3578 | 0.0635 | 0.4181 | 0.1102 | 0.0653 | 0.4299 | 0.1133 | 0.0702 | 0.4625 | 0.1219 | 0.0685 | 0.4511 | 0.1189 | 0.2051 | 0.6620 | 0.3131 | 0.2081 | 0.6720 | 0.3178 | 0.2264 | 0.7308 | 0.3457 | 0.2235 | 0.7215 | 0.3413 | 0.2037 | 0.0845 | 0.1194 | 0.2047 | 0.0849 | 0.1200 | 0.2057 | 0.0853 | 0.1206 | 0.2044 | 0.0847 | 0.1198 | 0.3130 | 0.8610 | 0.4591 | 0.3238 | 0.8907 | 0.4750 | 0.3215 | 0.8844 | 0.4716 | 0.3159 | 0.8689 | 0.4633 | 0.2403 | 0.4711 | 0.3182 | 0.2716 | 0.5326 | 0.3598 | 0.2622 | 0.5141 | 0.3473 | 0.2442 | 0.4789 | 0.3235 | 0.2319 | 0.4085 | 0.2958 | 0.3113 | 0.5485 | 0.3972 | 0.3007 | 0.5298 | 0.3836 | 0.2496 | 0.4398 | 0.3184 | 0.1188 | 0.1524 | 0.1335 | 0.2758 | 0.3538 | 0.3100 | 0.2035 | 0.2610 | 0.2287 | 0.1201 | 0.1541 | 0.1350 | 0.1553 | 0.5926 | 0.2461 | 0.1727 | 0.6591 | 0.2737 | 0.1819 | 0.6940 | 0.2882 | 0.1717 | 0.6553 | 0.2721 | 0.1915 | 0.2688 | 0.2236 | 0.2989 | 0.4195 | 0.3491 | 0.2776 | 0.3897 | 0.3243 | 0.2029 | 0.2848 | 0.2370 | 0.0882 | 0.3259 | 0.1388 | 0.0946 | 0.3494 | 0.1488 | 0.1497 | 0.5532 | 0.2356 | 0.1387 | 0.5126 | 0.2184 | 0.0793 | 0.4137 | 0.1331 | 0.0932 | 0.4864 | 0.1565 | 0.0865 | 0.4515 | 0.1452 | 0.0794 | 0.4145 | 0.1333 | 0.1641 | 0.7219 | 0.2674 | 0.2002 | 0.8810 | 0.3263 | 0.1829 | 0.8045 | 0.2980 | 0.1646 | 0.7243 | 0.2683 | 0.2769 | 0.5548 | 0.3694 | 0.3071 | 0.6153 | 0.4097 | 0.3205 | 0.6420 | 0.4275 | 0.2984 | 0.5978 | 0.3981 | 0.1666 | 0.7200 | 0.2706 | 0.1701 | 0.7348 | 0.2762 | 0.1685 | 0.7279 | 0.2736 | 0.1667 | 0.7203 | 0.2708 | 
0.0605 | 0.5548 | 0.1091 | 0.0664 | 0.6090 | 0.1197 | 0.0659 | 0.6044 | 0.1188 | 0.0628 | 0.5759 | 0.1132 | 0.0514 | 0.3560 | 0.0899 | 0.0538 | 0.3722 | 0.0940 | 0.0765 | 0.5296 | 0.1337 | 0.0728 | 0.5035 | 0.1272 |
| 1.1957 | 7.9808 | 624 | 1.0754 | 0.7420 | 0.4672 | 0.5734 | 0.8666 | 0.5457 | 0.6697 | 0.8418 | 0.5301 | 0.6505 | 0.7698 | 0.4848 | 0.5949 | 0.3428 | 0.6648 | 0.4523 | 0.3454 | 0.6699 | 0.4558 | 0.3538 | 0.6862 | 0.4669 | 0.3518 | 0.6823 | 0.4643 | 0.0 | 0.0 | 0 | 0.0 | 0.0 | 0 | 0.0 | 0.0 | 0 | 0.0 | 0.0 | 0 | 0.3447 | 0.8594 | 0.4920 | 0.3535 | 0.8815 | 0.5047 | 0.3537 | 0.8819 | 0.5049 | 0.3484 | 0.8686 | 0.4973 | 0.2130 | 0.4185 | 0.2823 | 0.3157 | 0.6203 | 0.4185 | 0.2672 | 0.5249 | 0.3541 | 0.2142 | 0.4208 | 0.2839 | 0.3063 | 0.6593 | 0.4183 | 0.3193 | 0.6872 | 0.4361 | 0.3356 | 0.7222 | 0.4583 | 0.3270 | 0.7037 | 0.4465 | 0.2063 | 0.6482 | 0.3130 | 0.2760 | 0.8670 | 0.4187 | 0.2418 | 0.7598 | 0.3669 | 0.2070 | 0.6502 | 0.3140 | 0.2448 | 0.8390 | 0.3791 | 0.2576 | 0.8827 | 0.3988 | 0.2513 | 0.8612 | 0.3891 | 0.2449 | 0.8392 | 0.3792 | 0.1221 | 0.7256 | 0.2090 | 0.1394 | 0.8289 | 0.2387 | 0.1320 | 0.7849 | 0.2261 | 0.1229 | 0.7306 | 0.2104 | 0.1285 | 0.2384 | 0.1670 | 0.2721 | 0.5048 | 0.3536 | 0.2065 | 0.3832 | 0.2684 | 0.1301 | 0.2413 | 0.1690 | 0.0145 | 0.2431 | 0.0274 | 0.0206 | 0.3451 | 0.0389 | 0.0178 | 0.2978 | 0.0336 | 0.0147 | 0.2457 | 0.0277 | 0.2667 | 0.7044 | 0.3869 | 0.3034 | 0.8015 | 0.4402 | 0.2858 | 0.7548 | 0.4146 | 0.2670 | 0.7052 | 0.3873 | 0.0090 | 0.2725 | 0.0174 | 0.0106 | 0.3217 | 0.0206 | 0.0186 | 0.5638 | 0.0360 | 0.0161 | 0.4889 | 0.0312 | 0.1364 | 0.6612 | 0.2261 | 0.1586 | 0.7688 | 0.2629 | 0.1519 | 0.7364 | 0.2518 | 0.1401 | 0.6795 | 0.2324 | 0.2428 | 0.5930 | 0.3445 | 0.2642 | 0.6454 | 0.3749 | 0.2587 | 0.6318 | 0.3671 | 0.2472 | 0.6038 | 0.3508 | 0.3011 | 0.8259 | 0.4414 | 0.3204 | 0.8788 | 0.4696 | 0.3136 | 0.8600 | 0.4596 | 0.3036 | 0.8325 | 0.4449 | 0.3432 | 0.8614 | 0.4908 | 0.3489 | 0.8758 | 0.4990 | 0.3488 | 0.8756 | 0.4989 | 0.3456 | 0.8675 | 0.4943 | 0.0349 | 0.4464 | 0.0647 | 0.0447 | 0.5718 | 0.0829 | 0.0457 | 0.5852 | 0.0848 | 0.0400 | 0.5117 | 0.0742 | 0.2598 | 0.5574 | 0.3544 | 0.2811 | 0.6030 | 0.3834 | 0.2733 | 0.5863 | 0.3728 | 0.2619 | 0.5620 | 0.3573 | 0.0632 | 0.4168 | 0.1098 | 0.0650 | 0.4290 | 0.1130 | 0.0700 | 0.4617 | 0.1216 | 0.0683 | 0.4501 | 0.1185 | 0.2047 | 0.6578 | 0.3122 | 0.2079 | 0.6681 | 0.3172 | 0.2264 | 0.7273 | 0.3453 | 0.2235 | 0.7180 | 0.3409 | 0.2034 | 0.0843 | 0.1192 | 0.2045 | 0.0848 | 0.1199 | 0.2055 | 0.0852 | 0.1205 | 0.2041 | 0.0846 | 0.1196 | 0.3131 | 0.8613 | 0.4593 | 0.3240 | 0.8913 | 0.4753 | 0.3217 | 0.8848 | 0.4718 | 0.3160 | 0.8691 | 0.4635 | 0.2398 | 0.4699 | 0.3175 | 0.2714 | 0.5318 | 0.3594 | 0.2618 | 0.5130 | 0.3466 | 0.2437 | 0.4775 | 0.3227 | 0.2317 | 0.4040 | 0.2945 | 0.3111 | 0.5425 | 0.3954 | 0.3012 | 0.5253 | 0.3829 | 0.2498 | 0.4356 | 0.3175 | 0.1183 | 0.1515 | 0.1328 | 0.2759 | 0.3533 | 0.3099 | 0.2031 | 0.2600 | 0.2281 | 0.1197 | 0.1532 | 0.1344 | 0.1536 | 0.5915 | 0.2438 | 0.1715 | 0.6605 | 0.2723 | 0.1803 | 0.6944 | 0.2863 | 0.1700 | 0.6547 | 0.2699 | 0.1909 | 0.2664 | 0.2224 | 0.2989 | 0.4172 | 0.3483 | 0.2780 | 0.3879 | 0.3239 | 0.2026 | 0.2828 | 0.2361 | 0.0881 | 0.3209 | 0.1382 | 0.0944 | 0.3440 | 0.1482 | 0.1513 | 0.5511 | 0.2374 | 0.1403 | 0.5113 | 0.2202 | 0.0792 | 0.4133 | 0.1330 | 0.0932 | 0.4865 | 0.1565 | 0.0865 | 0.4513 | 0.1452 | 0.0794 | 0.4141 | 0.1332 | 0.1641 | 0.7207 | 0.2674 | 0.2003 | 0.8796 | 0.3263 | 0.1830 | 0.8035 | 0.2981 | 0.1647 | 0.7234 | 0.2684 | 0.2766 | 0.5484 | 0.3677 | 0.3071 | 0.6089 | 0.4083 | 0.3212 | 0.6368 | 0.4270 | 0.2989 | 0.5928 | 0.3975 | 0.1668 | 0.7204 | 0.2709 | 0.1701 | 0.7349 | 0.2763 | 0.1686 | 0.7280 | 0.2737 | 0.1669 | 0.7207 | 0.2710 | 
0.0603 | 0.5545 | 0.1088 | 0.0662 | 0.6080 | 0.1193 | 0.0657 | 0.6038 | 0.1185 | 0.0626 | 0.5757 | 0.1130 | 0.0515 | 0.3535 | 0.0899 | 0.0538 | 0.3691 | 0.0939 | 0.0767 | 0.5263 | 0.1339 | 0.0728 | 0.4994 | 0.1270 |
### Framework versions
- Transformers 4.45.2
- Pytorch 2.3.1
- Datasets 3.2.0
- Tokenizers 0.20.3
|
tarundachepally/granite_8b_5 | tarundachepally | 2025-05-02T12:04:49Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:ibm-granite/granite-8b-code-instruct-128k",
"base_model:finetune:ibm-granite/granite-8b-code-instruct-128k",
"endpoints_compatible",
"region:us"
] | null | 2025-05-02T12:04:43Z | ---
base_model: ibm-granite/granite-8b-code-instruct-128k
library_name: transformers
model_name: granite_8b_5
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for granite_8b_5
This model is a fine-tuned version of [ibm-granite/granite-8b-code-instruct-128k](https://huggingface.co/ibm-granite/granite-8b-code-instruct-128k).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="tarundachepally/granite_8b_5", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
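As a minimal sketch of what such a TRL SFT run typically looks like (the dataset below is a placeholder; this card does not say which data or hyperparameters were actually used):
```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Placeholder dataset; the actual training data is not documented in this card.
dataset = load_dataset("trl-lib/Capybara", split="train")

trainer = SFTTrainer(
    model="ibm-granite/granite-8b-code-instruct-128k",  # the stated base model
    train_dataset=dataset,
    args=SFTConfig(output_dir="granite_8b_5"),
)
trainer.train()
```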
### Framework versions
- TRL: 0.12.2
- Transformers: 4.46.3
- Pytorch: 2.6.0+cu124
- Datasets: 3.5.1
- Tokenizers: 0.20.3
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
kblz/mms-tts-amh-v2 | kblz | 2025-05-02T12:00:20Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"vits",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-02T11:59:40Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
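The card leaves this blank, but the repo's tags mark it as a VITS checkpoint, so here is a minimal sketch assuming the standard MMS-style TTS interface in `transformers` (unverified for this particular repo):
```python
import torch
from transformers import VitsModel, AutoTokenizer

# Assumes an MMS-TTS-style VITS layout; not confirmed by this card.
model = VitsModel.from_pretrained("kblz/mms-tts-amh-v2")
tokenizer = AutoTokenizer.from_pretrained("kblz/mms-tts-amh-v2")

inputs = tokenizer("ሰላም ለዓለም", return_tensors="pt")  # Amharic sample text
with torch.no_grad():
    waveform = model(**inputs).waveform  # audio at model.config.sampling_rate
```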
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
showhandshowhand/task-8-microsoft-Phi-3.5-mini-instruct | showhandshowhand | 2025-05-02T11:59:36Z | 322 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:microsoft/Phi-3.5-mini-instruct",
"base_model:adapter:microsoft/Phi-3.5-mini-instruct",
"region:us"
] | null | 2025-05-01T04:41:48Z | ---
base_model: microsoft/Phi-3.5-mini-instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
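The card leaves this blank; since the metadata above identifies the repo as a PEFT adapter for `microsoft/Phi-3.5-mini-instruct`, a minimal loading sketch (assuming a causal-LM adapter):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the stated base model, then attach the adapter weights from this repo.
base = AutoModelForCausalLM.from_pretrained("microsoft/Phi-3.5-mini-instruct")
model = PeftModel.from_pretrained(base, "showhandshowhand/task-8-microsoft-Phi-3.5-mini-instruct")
tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3.5-mini-instruct")
```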
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1 |
luckycanucky/hnd_model_16b2-Q8_0-GGUF | luckycanucky | 2025-05-02T11:53:46Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"sft",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:luckycanucky/hnd_model_16b2",
"base_model:quantized:luckycanucky/hnd_model_16b2",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-02T11:53:28Z | ---
base_model: luckycanucky/hnd_model_16b2
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
- llama-cpp
- gguf-my-repo
---
# luckycanucky/hnd_model_16b2-Q8_0-GGUF
This model was converted to GGUF format from [`luckycanucky/hnd_model_16b2`](https://huggingface.co/luckycanucky/hnd_model_16b2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/luckycanucky/hnd_model_16b2) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo luckycanucky/hnd_model_16b2-Q8_0-GGUF --hf-file hnd_model_16b2-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo luckycanucky/hnd_model_16b2-Q8_0-GGUF --hf-file hnd_model_16b2-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo luckycanucky/hnd_model_16b2-Q8_0-GGUF --hf-file hnd_model_16b2-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo luckycanucky/hnd_model_16b2-Q8_0-GGUF --hf-file hnd_model_16b2-q8_0.gguf -c 2048
```
|
seekerdeep/task-8-microsoft-Phi-3.5-mini-instruct | seekerdeep | 2025-05-02T11:53:22Z | 358 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:microsoft/Phi-3.5-mini-instruct",
"base_model:adapter:microsoft/Phi-3.5-mini-instruct",
"region:us"
] | null | 2025-04-13T05:07:28Z | ---
base_model: microsoft/Phi-3.5-mini-instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1 |
ihsat/DzongkhaASR | ihsat | 2025-05-02T11:52:53Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-02T11:48:50Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
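The card leaves this blank; given the repository name, a speculative sketch assuming the checkpoint works with the standard ASR pipeline (the actual architecture is not documented here):
```python
from transformers import pipeline

# Hypothetical usage; the card does not state the model type or expected input format.
asr = pipeline("automatic-speech-recognition", model="ihsat/DzongkhaASR")
print(asr("dzongkha_sample.wav")["text"])  # path to a local audio file
```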
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
MinaMila/phi3_unlearned_LoRa_ACSEmployment_2_ep6_22 | MinaMila | 2025-05-02T11:52:33Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-02T11:52:30Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
luckycanucky/hnd_model_16b2 | luckycanucky | 2025-05-02T11:51:34Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-02T11:48:36Z | ---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** luckycanucky
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
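A minimal inference sketch with the `transformers` chat pipeline (prompt and generation settings are placeholders):
```python
from transformers import pipeline

# The model derives from a Llama 3.2 instruct checkpoint, so chat-style messages should work.
generator = pipeline("text-generation", model="luckycanucky/hnd_model_16b2", device_map="auto")
messages = [{"role": "user", "content": "Hello!"}]
print(generator(messages, max_new_tokens=64, return_full_text=False)[0]["generated_text"])
```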
|
yusuke111/myBit-Llama2-jp-127M-2B4TLike-2024 | yusuke111 | 2025-05-02T11:50:25Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"bit_llama",
"text-generation",
"generated_from_trainer",
"custom_code",
"autotrain_compatible",
"region:us"
] | text-generation | 2025-04-30T15:05:17Z | ---
library_name: transformers
tags:
- generated_from_trainer
model-index:
- name: myBit-Llama2-jp-127M-2B4TLike
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# myBit-Llama2-jp-127M-2B4TLike
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8431
## Model description
More information needed
## Intended uses & limitations
More information needed
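One practical note: the repo tags list a custom `bit_llama` architecture, so loading presumably requires trusting the bundled remote code. A hedged sketch:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# trust_remote_code is assumed to be required for the custom bit_llama architecture.
model = AutoModelForCausalLM.from_pretrained(
    "yusuke111/myBit-Llama2-jp-127M-2B4TLike-2024", trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained(
    "yusuke111/myBit-Llama2-jp-127M-2B4TLike-2024", trust_remote_code=True
)
```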
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0024
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 96
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.95) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 750
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 4.9846 | 0.0587 | 500 | 5.1982 |
| 3.7747 | 0.1175 | 1000 | 4.4941 |
| 3.5109 | 0.1762 | 1500 | 4.0737 |
| 3.3568 | 0.2350 | 2000 | 3.8909 |
| 3.276 | 0.2937 | 2500 | 3.7147 |
| 3.2203 | 0.3525 | 3000 | 3.5468 |
| 3.1626 | 0.4112 | 3500 | 3.4098 |
| 3.1272 | 0.4700 | 4000 | 3.3188 |
| 3.0925 | 0.5287 | 4500 | 3.2339 |
| 3.0693 | 0.5874 | 5000 | 3.1539 |
| 3.0412 | 0.6462 | 5500 | 3.0721 |
| 2.9981 | 0.7049 | 6000 | 3.0009 |
| 2.9881 | 0.7637 | 6500 | 2.9514 |
| 2.9871 | 0.8224 | 7000 | 2.9162 |
| 2.9796 | 0.8812 | 7500 | 2.8879 |
| 2.9914 | 0.9399 | 8000 | 2.8849 |
| 2.9649 | 0.9987 | 8500 | 2.8431 |
### Framework versions
- Transformers 4.47.1
- Pytorch 2.6.0+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1
|
Clip-Full-18-Valerie-Kay-Video-Origina/Video.btswiki.com.paro.aarti.viral.video.link.original.twitter | Clip-Full-18-Valerie-Kay-Video-Origina | 2025-05-02T11:50:10Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-02T11:48:32Z | Watch 🟢 ➤ ➤ ➤ <a href="https://sulkurl.com/pg2"> 🌐 Click Here To link (Full Viral Video Link)
🔴 ➤►DOWNLOAD👉👉🟢 ➤
|
banghyunmin/thermal-people-yolov11n | banghyunmin | 2025-05-02T11:49:45Z | 0 | 0 | null | [
"object-detection",
"yolo",
"thermal",
"person-detection",
"dataset:roboflow/people-detection-thermal",
"license:cc-by-4.0",
"region:us"
] | object-detection | 2025-05-02T11:33:29Z | ---
tags:
- object-detection
- yolo
- thermal
- person-detection
license: cc-by-4.0
datasets:
- roboflow/people-detection-thermal
model_name: Thermal YOLO V11n
library: ultralytics
---
# 🏆 Thermal YOLO V11n - Person Detection Model
## 📌 Overview
This is a **YOLO-based thermal person-detection model**, optimized to **detect people accurately** across a range of environments.
It was designed with **tuned confidence (conf) and IoU thresholds**, **removal of unnecessary detections**, and **minimal false positives** as its goals.
- **Model name:** Thermal YOLO V11n (`best.pt`)
- **Author:** [banghyunmin](https://huggingface.co/banghyunmin)
- **Repository:** [Hugging Face model page](https://huggingface.co/banghyunmin/thermal-people-yolov11n)
---
## 📂 Dataset Information
This model was trained on the **"People Detection - Thermal" dataset provided by Roboflow**.
🔹 **Dataset source:** [Roboflow People Detection - Thermal](https://universe.roboflow.com/roboflow-universe-projects/people-detection-thermal)
🔹 **Dataset size:** 15,303 thermal images
🔹 **License:** CC BY 4.0 (free to use and adapt)
🔹 **Class:** `person` (detects person objects only)
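A minimal inference sketch with the `ultralytics` package (not part of the original card; the filename `best.pt`, the sample image, and the confidence threshold are assumptions):

```python
from huggingface_hub import hf_hub_download
from ultralytics import YOLO

# Fetch the checkpoint from this repo (filename assumed to be best.pt).
weights = hf_hub_download("banghyunmin/thermal-people-yolov11n", "best.pt")

model = YOLO(weights)
results = model.predict("thermal_frame.jpg", conf=0.5)  # conf is illustrative

for box in results[0].boxes:
    print(box.xyxy, float(box.conf))  # person bounding box and score
```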
|
Sertriz/DeepSeek-R1-TH | Sertriz | 2025-05-02T11:49:07Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-02T11:44:30Z | ---
base_model: unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Sertriz
- **License:** apache-2.0
- **Finetuned from model:** unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
zelk12/MT-gemma-3-12B | zelk12 | 2025-05-02T11:47:10Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"mergekit",
"merge",
"conversational",
"arxiv:2311.03099",
"base_model:IlyaGusev/saiga_gemma3_12b",
"base_model:merge:IlyaGusev/saiga_gemma3_12b",
"base_model:soob3123/amoral-gemma3-12B-v2",
"base_model:merge:soob3123/amoral-gemma3-12B-v2",
"license:gemma",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-05-02T11:41:19Z | ---
base_model:
- IlyaGusev/saiga_gemma3_12b
- soob3123/amoral-gemma3-12B-v2
library_name: transformers
tags:
- mergekit
- merge
license: gemma
pipeline_tag: image-text-to-text
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE TIES](https://arxiv.org/abs/2311.03099) merge method using [soob3123/amoral-gemma3-12B-v2](https://huggingface.co/soob3123/amoral-gemma3-12B-v2) as a base.
### Models Merged
The following models were included in the merge:
* [IlyaGusev/saiga_gemma3_12b](https://huggingface.co/IlyaGusev/saiga_gemma3_12b)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: soob3123/amoral-gemma3-12B-v2
#no parameters necessary for base model
- model: IlyaGusev/saiga_gemma3_12b
parameters:
density: 0.5
weight: 0.5
merge_method: dare_ties
base_model: soob3123/amoral-gemma3-12B-v2
parameters:
normalize: true
dtype: bfloat16
``` |
mluger/vitFaceExpressionBalancedAugmentationFer2013Plus | mluger | 2025-05-02T11:47:02Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2025-05-02T11:46:31Z | ---
library_name: transformers
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vitFaceExpressionBalancedAugmentationFer2013Plus
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.840540921256515
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vitFaceExpressionBalancedAugmentationFer2013Plus
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5194
- Accuracy: 0.8405
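A minimal inference sketch (not included in the auto-generated card; the image path is illustrative) using the `transformers` image-classification pipeline:

```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="mluger/vitFaceExpressionBalancedAugmentationFer2013Plus",
)
print(classifier("face.jpg"))  # top expression labels with scores
```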
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.2073 | 1.0 | 666 | 0.6685 | 0.7808 |
| 0.8278 | 2.0 | 1332 | 0.5579 | 0.8131 |
| 0.7247 | 3.0 | 1998 | 0.5157 | 0.8252 |
| 0.4977 | 4.0 | 2664 | 0.5088 | 0.8300 |
| 0.405 | 5.0 | 3330 | 0.5071 | 0.8384 |
| 0.3964 | 6.0 | 3996 | 0.5110 | 0.8394 |
| 0.2824 | 7.0 | 4662 | 0.5183 | 0.8421 |
| 0.3041 | 8.0 | 5328 | 0.5194 | 0.8405 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1
|
Clip-Full-18-Valerie-Kay-Video-Origina/Clip-Full-18-Valerie-Kay-Video-Original-Link-Tiktok-Instagram-Twitter | Clip-Full-18-Valerie-Kay-Video-Origina | 2025-05-02T11:40:01Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-02T11:35:55Z | Watch 🟢 ➤ ➤ ➤ <a href="https://sulkurl.com/pgZ"> 🌐 Click Here To link (Full Viral Video Link)
🔴 ➤►DOWNLOAD👉👉🟢 ➤
|
lisabdunlap/Llama-3.1-8B-Instruct-unsloth-bnb-4bit-r32-e20-lr0.0002-markdown_format_small-new | lisabdunlap | 2025-05-02T11:38:49Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/Llama-3.1-8B-Instruct-unsloth-bnb-4bit",
"base_model:finetune:unsloth/Llama-3.1-8B-Instruct-unsloth-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-02T11:36:43Z | ---
base_model: unsloth/Llama-3.1-8B-Instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** lisabdunlap
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Llama-3.1-8B-Instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Hjbg/Hh | Hjbg | 2025-05-02T11:38:28Z | 0 | 0 | null | [
"license:cc-by-sa-3.0",
"region:us"
] | null | 2025-05-02T11:37:06Z | ---
license: cc-by-sa-3.0
---
|
kobzarvs/lora_model | kobzarvs | 2025-05-02T11:36:24Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-02T11:36:17Z | ---
base_model: unsloth/qwen2.5-0.5b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** kobzarvs
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2.5-0.5b-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
noobjp/testmodel | noobjp | 2025-05-02T11:35:27Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/Qwen3-0.6B-unsloth-bnb-4bit",
"base_model:finetune:unsloth/Qwen3-0.6B-unsloth-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-02T11:34:56Z | ---
base_model: unsloth/Qwen3-0.6B-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** noobjp
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen3-0.6B-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
nis12ram/qwen3-4b-hindiNER-all-exp2-new | nis12ram | 2025-05-02T11:33:08Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-02T11:30:08Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Ahresh-53/GAMGAM | Ahresh-53 | 2025-05-02T11:32:29Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-02T11:32:29Z | ---
license: apache-2.0
---
|
ibokajordan/RAG_egitim | ibokajordan | 2025-05-02T11:24:48Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-27T16:17:12Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
KashyapGobubble/Llama-3.2-3B-Instruct-sft-20250502_110423 | KashyapGobubble | 2025-05-02T11:20:24Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-02T11:18:33Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
BABYSHARK09/Uni_7x3 | BABYSHARK09 | 2025-05-02T11:19:10Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-02T11:14:55Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
BABYSHARK09/Uni_7x1 | BABYSHARK09 | 2025-05-02T11:18:16Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-02T11:14:42Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
harrykeeran12/radiology_error_mistral | harrykeeran12 | 2025-05-02T11:14:25Z | 10 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/mistral-7b-v0.3-bnb-4bit",
"base_model:finetune:unsloth/mistral-7b-v0.3-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"8-bit",
"region:us"
] | text-generation | 2025-04-22T18:18:47Z | ---
base_model: unsloth/mistral-7b-v0.3-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** harrykeeran12
- **License:** apache-2.0
- **Finetuned from model:** unsloth/mistral-7b-v0.3-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
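As a loading sketch (the card itself ships no usage code; the sequence length below is an assumption), the fine-tune can be pulled back into Unsloth for fast inference:

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="harrykeeran12/radiology_error_mistral",
    max_seq_length=2048,   # illustrative; pick the length used in training
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enables Unsloth's faster decoding
```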
|
ttdamowang/llama3.2_medical | ttdamowang | 2025-05-02T11:12:27Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"unsloth",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-02T11:07:32Z | ---
library_name: transformers
tags:
- unsloth
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
BABYSHARK09/Uni_6x0 | BABYSHARK09 | 2025-05-02T11:11:36Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-02T10:14:17Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
fax4ever/culturalitems-roberta-base-1 | fax4ever | 2025-05-02T11:01:57Z | 0 | 0 | null | [
"safetensors",
"roberta",
"license:apache-2.0",
"region:us"
] | null | 2025-05-02T10:18:55Z | ---
license: apache-2.0
---
|
haihp02/Qwen3-4B-Base-082907de-7165-4f64-8106-82d56adb58af-dpo-tuned | haihp02 | 2025-05-02T10:56:18Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"unsloth",
"trl",
"sft",
"dpo",
"arxiv:2305.18290",
"base_model:unsloth/Qwen3-4B-Base",
"base_model:finetune:unsloth/Qwen3-4B-Base",
"endpoints_compatible",
"region:us"
] | null | 2025-05-02T10:55:56Z | ---
base_model: unsloth/Qwen3-4B-Base
library_name: transformers
model_name: Qwen3-4B-Base-082907de-7165-4f64-8106-82d56adb58af-dpo-tuned
tags:
- generated_from_trainer
- unsloth
- trl
- sft
- dpo
licence: license
---
# Model Card for Qwen3-4B-Base-082907de-7165-4f64-8106-82d56adb58af-dpo-tuned
This model is a fine-tuned version of [unsloth/Qwen3-4B-Base](https://huggingface.co/unsloth/Qwen3-4B-Base).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="haihp02/Qwen3-4B-Base-082907de-7165-4f64-8106-82d56adb58af-dpo-tuned", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/trunghainguyenhp02/sn56-dpo-train/runs/u37ike41)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.7.0
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
hanaearg/emo-Mistral-Nemo-eng-10epochs | hanaearg | 2025-05-02T10:53:43Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/Mistral-Nemo-Instruct-2407-bnb-4bit",
"base_model:finetune:unsloth/Mistral-Nemo-Instruct-2407-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-02T10:53:30Z | ---
base_model: unsloth/Mistral-Nemo-Instruct-2407-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** hanaearg
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Mistral-Nemo-Instruct-2407-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
luckycanucky/lora_model_hnd | luckycanucky | 2025-05-02T10:49:27Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-02T10:49:18Z | ---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** luckycanucky
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
prithivMLmods/x-bot-profile-detection | prithivMLmods | 2025-05-02T10:47:25Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"siglip",
"image-classification",
"bot-detection",
"x",
"twitter",
"experimental",
"en",
"dataset:drveronika/x_fake_profile_detection",
"base_model:google/siglip2-base-patch16-224",
"base_model:finetune:google/siglip2-base-patch16-224",
"doi:10.57967/hf/5320",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2025-05-01T08:05:40Z | ---
license: apache-2.0
datasets:
- drveronika/x_fake_profile_detection
language:
- en
base_model:
- google/siglip2-base-patch16-224
pipeline_tag: image-classification
library_name: transformers
tags:
- bot-detection
- x
- twitter
- experimental
---

# **x-bot-profile-detection**
> **x-bot-profile-detection** is a SigLIP2-based classification model designed to detect **profile authenticity types on social media platforms** (such as X/Twitter). It categorizes a profile image into four classes: **bot**, **cyborg**, **real**, or **verified**. Built on `google/siglip2-base-patch16-224`, the model leverages advanced vision-language pretraining for robust image classification.
```py
Classification Report:
precision recall f1-score support
bot 0.9912 0.9960 0.9936 2500
cyborg 0.9940 0.9880 0.9910 2500
real 0.8634 0.9936 0.9239 2500
verified 0.9948 0.8460 0.9144 2500
accuracy 0.9559 10000
macro avg 0.9609 0.9559 0.9557 10000
weighted avg 0.9609 0.9559 0.9557 10000
```

---
## **Label Classes**
The model predicts one of the following profile types:
```
0: bot → Automated accounts
1: cyborg → Partially automated or suspiciously mixed behavior
2: real → Genuine human users
3: verified → Verified accounts or official profiles
```
---
## **Installation**
```bash
pip install transformers torch pillow gradio
```
---
## **Example Inference Code**
```python
import gradio as gr
from transformers import AutoImageProcessor, SiglipForImageClassification
from PIL import Image
import torch
# Load model and processor
model_name = "prithivMLmods/x-bot-profile-detection"
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)
# Define class mapping
id2label = {
    "0": "bot",
    "1": "cyborg",
    "2": "real",
    "3": "verified"
}

def detect_profile_type(image):
    image = Image.fromarray(image).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")

    with torch.no_grad():
        outputs = model(**inputs)
        logits = outputs.logits
        probs = torch.nn.functional.softmax(logits, dim=1).squeeze().tolist()

    prediction = {
        id2label[str(i)]: round(probs[i], 3) for i in range(len(probs))
    }

    return prediction

# Create Gradio UI
iface = gr.Interface(
    fn=detect_profile_type,
    inputs=gr.Image(type="numpy"),
    outputs=gr.Label(num_top_classes=4, label="Predicted Profile Type"),
    title="x-bot-profile-detection",
    description="Upload a social media profile picture to classify it as Bot, Cyborg, Real, or Verified using a SigLIP2 model."
)

if __name__ == "__main__":
    iface.launch()
```
---
## **Use Cases**
* Social media moderation and automation detection
* Anomaly detection in public discourse
* Botnet analysis and influence operation research
* Platform integrity and trust verification |
mradermacher/legml-v0.1-GGUF | mradermacher | 2025-05-02T10:47:16Z | 36 | 1 | transformers | [
"transformers",
"gguf",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:legmlai/legml-v0.1",
"base_model:quantized:legmlai/legml-v0.1",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-01T12:18:13Z | ---
base_model: legmlai/legml-v0.1
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/legmlai/legml-v0.1
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/legml-v0.1-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
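If you want to try a quant programmatically, a minimal sketch using the llama-cpp-python bindings is below (the file name comes from the table below; context size and prompt are illustrative):
```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Downloads the selected quant from this repo via huggingface_hub
llm = Llama.from_pretrained(
    repo_id="mradermacher/legml-v0.1-GGUF",
    filename="legml-v0.1.Q4_K_M.gguf",  # any quant from the table below works
    n_ctx=4096,  # context window; adjust to taste
)

out = llm("Bonjour, peux-tu te présenter ?", max_tokens=128)
print(out["choices"][0]["text"])
```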
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/legml-v0.1-GGUF/resolve/main/legml-v0.1.Q2_K.gguf) | Q2_K | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/legml-v0.1-GGUF/resolve/main/legml-v0.1.Q3_K_S.gguf) | Q3_K_S | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/legml-v0.1-GGUF/resolve/main/legml-v0.1.Q3_K_M.gguf) | Q3_K_M | 7.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/legml-v0.1-GGUF/resolve/main/legml-v0.1.Q3_K_L.gguf) | Q3_K_L | 8.0 | |
| [GGUF](https://huggingface.co/mradermacher/legml-v0.1-GGUF/resolve/main/legml-v0.1.IQ4_XS.gguf) | IQ4_XS | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/legml-v0.1-GGUF/resolve/main/legml-v0.1.Q4_K_S.gguf) | Q4_K_S | 8.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/legml-v0.1-GGUF/resolve/main/legml-v0.1.Q4_K_M.gguf) | Q4_K_M | 9.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/legml-v0.1-GGUF/resolve/main/legml-v0.1.Q5_K_S.gguf) | Q5_K_S | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/legml-v0.1-GGUF/resolve/main/legml-v0.1.Q5_K_M.gguf) | Q5_K_M | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/legml-v0.1-GGUF/resolve/main/legml-v0.1.Q6_K.gguf) | Q6_K | 12.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/legml-v0.1-GGUF/resolve/main/legml-v0.1.Q8_0.gguf) | Q8_0 | 15.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/legml-v0.1-i1-GGUF | mradermacher | 2025-05-02T10:47:07Z | 313 | 1 | transformers | [
"transformers",
"gguf",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:legmlai/legml-v0.1",
"base_model:quantized:legmlai/legml-v0.1",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-03-01T13:51:51Z | ---
base_model: legmlai/legml-v0.1
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/legmlai/legml-v0.1
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/legml-v0.1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
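Similarly, a minimal sketch for fetching and running one of these imatrix quants from Python (file name taken from the table below; settings are illustrative):
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch the "optimal size/speed/quality" quant from the table below
path = hf_hub_download(
    repo_id="mradermacher/legml-v0.1-i1-GGUF",
    filename="legml-v0.1.i1-Q4_K_S.gguf",
)
llm = Llama(model_path=path, n_ctx=4096)
print(llm("Explain GGUF imatrix quants in one sentence.", max_tokens=96)["choices"][0]["text"])
```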
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/legml-v0.1-i1-GGUF/resolve/main/legml-v0.1.i1-IQ1_S.gguf) | i1-IQ1_S | 3.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/legml-v0.1-i1-GGUF/resolve/main/legml-v0.1.i1-IQ1_M.gguf) | i1-IQ1_M | 4.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/legml-v0.1-i1-GGUF/resolve/main/legml-v0.1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/legml-v0.1-i1-GGUF/resolve/main/legml-v0.1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/legml-v0.1-i1-GGUF/resolve/main/legml-v0.1.i1-IQ2_S.gguf) | i1-IQ2_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/legml-v0.1-i1-GGUF/resolve/main/legml-v0.1.i1-IQ2_M.gguf) | i1-IQ2_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/legml-v0.1-i1-GGUF/resolve/main/legml-v0.1.i1-Q2_K_S.gguf) | i1-Q2_K_S | 5.5 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/legml-v0.1-i1-GGUF/resolve/main/legml-v0.1.i1-Q2_K.gguf) | i1-Q2_K | 5.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/legml-v0.1-i1-GGUF/resolve/main/legml-v0.1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 6.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/legml-v0.1-i1-GGUF/resolve/main/legml-v0.1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 6.5 | |
| [GGUF](https://huggingface.co/mradermacher/legml-v0.1-i1-GGUF/resolve/main/legml-v0.1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 6.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/legml-v0.1-i1-GGUF/resolve/main/legml-v0.1.i1-IQ3_S.gguf) | i1-IQ3_S | 6.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/legml-v0.1-i1-GGUF/resolve/main/legml-v0.1.i1-IQ3_M.gguf) | i1-IQ3_M | 7.0 | |
| [GGUF](https://huggingface.co/mradermacher/legml-v0.1-i1-GGUF/resolve/main/legml-v0.1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 7.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/legml-v0.1-i1-GGUF/resolve/main/legml-v0.1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 8.0 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/legml-v0.1-i1-GGUF/resolve/main/legml-v0.1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 8.2 | |
| [GGUF](https://huggingface.co/mradermacher/legml-v0.1-i1-GGUF/resolve/main/legml-v0.1.i1-Q4_0.gguf) | i1-Q4_0 | 8.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/legml-v0.1-i1-GGUF/resolve/main/legml-v0.1.i1-IQ4_NL.gguf) | i1-IQ4_NL | 8.6 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/legml-v0.1-i1-GGUF/resolve/main/legml-v0.1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 8.7 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/legml-v0.1-i1-GGUF/resolve/main/legml-v0.1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 9.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/legml-v0.1-i1-GGUF/resolve/main/legml-v0.1.i1-Q4_1.gguf) | i1-Q4_1 | 9.5 | |
| [GGUF](https://huggingface.co/mradermacher/legml-v0.1-i1-GGUF/resolve/main/legml-v0.1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/legml-v0.1-i1-GGUF/resolve/main/legml-v0.1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/legml-v0.1-i1-GGUF/resolve/main/legml-v0.1.i1-Q6_K.gguf) | i1-Q6_K | 12.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Sh1man/canary-180m-flash-ru | Sh1man | 2025-05-02T10:37:31Z | 0 | 0 | nemo | [
"nemo",
"automatic-speech-recognition",
"automatic-speech-translation",
"speech",
"audio",
"Transformer",
"FastConformer",
"Conformer",
"pytorch",
"NeMo",
"ru",
"dataset:rulibrispeech",
"dataset:common_voice_21_ru",
"license:cc-by-4.0",
"region:us"
] | automatic-speech-recognition | 2025-05-02T10:30:58Z | ---
license: cc-by-4.0
language:
- ru
library_name: nemo
datasets:
- rulibrispeech
- common_voice_21_ru
tags:
- automatic-speech-recognition
- automatic-speech-translation
- speech
- audio
- Transformer
- FastConformer
- Conformer
- pytorch
- NeMo
---
# Canary 180M Flash
<style>
img {
display: inline;
}
</style>
## Description:
NVIDIA NeMo Canary Flash [1] is a family of multilingual multi-tasking models based on the Canary architecture [2] that achieves state-of-the-art performance on multiple speech benchmarks. With 182 million parameters and an inference speed of more than 1200 RTFx (on open-asr-leaderboard sets), canary-180m-flash supports automatic speech-to-text recognition (ASR) in 4 languages (English, German, French, Spanish) and translation from English to German/French/Spanish and from German/French/Spanish to English with or without punctuation and capitalization (PnC).
Additionally, canary-180m-flash offers an experimental feature for word-level and segment-level timestamps in English, German, French, and Spanish.
This model is released under the permissive CC-BY-4.0 license and is available for commercial use.
## Model Architecture:
Canary is an encoder-decoder model with FastConformer [3] Encoder and Transformer Decoder [4]. With audio features extracted from the encoder, task tokens such as \<target language\>, \<task\>, \<toggle timestamps\> and \<toggle PnC\> are fed into the Transformer Decoder to trigger the text generation process. Canary uses a concatenated tokenizer [5] from individual SentencePiece [6] tokenizers of each language, which makes it easy to scale up to more languages. The canary-180m-flash model has 17 encoder layers and 4 decoder layers, leading to a total of 182M parameters. For more details about the architecture, please refer to [1].
## NVIDIA NeMo
To train, fine-tune or transcribe with canary-180m-flash, you will need to install [NVIDIA NeMo](https://github.com/NVIDIA/NeMo).
## How to Use this Model
The model is available for use in the NeMo framework [7], and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
Please refer to [our tutorial](https://github.com/NVIDIA/NeMo/blob/main/tutorials/asr/Canary_Multitask_Speech_Model.ipynb) for more details.
A few inference examples listed below:
### Loading the Model
```python
from nemo.collections.asr.models import EncDecMultiTaskModel
# load model
canary_model = EncDecMultiTaskModel.from_pretrained('nvidia/canary-180m-flash')
# update decode params
decode_cfg = canary_model.cfg.decoding
decode_cfg.beam.beam_size = 1
canary_model.change_decoding_strategy(decode_cfg)
```
## Input:
**Input Type(s):** Audio <br>
**Input Format(s):** .wav or .flac files<br>
**Input Parameter(s):** 1D <br>
**Other Properties Related to Input:** 16000 Hz Mono-channel Audio, Pre-Processing Not Needed <br>
Input to canary-180m-flash can be either a list of paths to audio files or a jsonl manifest file.
### Inference with canary-180m-flash:
If the input is a list of paths, canary-180m-flash assumes that the audio is English and transcribes it; i.e., canary-180m-flash's default behavior is English ASR.
```python
output = canary_model.transcribe(
    ['path1.wav', 'path2.wav'],
    batch_size=16,  # batch size to run the inference with
    pnc='True',  # generate output with Punctuation and Capitalization
)

predicted_text = output[0].text
```
canary-180m-flash can also predict word-level and segment-level timestamps
```python
output = canary_model.transcribe(
    ['filepath.wav'],
    timestamps=True,  # generate output with timestamps
)

predicted_text = output[0].text
word_level_timestamps = output[0].timestamp['word']
segment_level_timestamps = output[0].timestamp['segment']
```
To predict timestamps for audio files longer than 10 seconds, we recommend using the longform inference script (explained in the next section) with `chunk_len_in_secs=10.0`.
To use canary-180m-flash to transcribe other supported languages, perform speech-to-text translation, or produce word-level timestamps, specify the input as a jsonl manifest file, where each line in the file is a dictionary containing the following fields:
```yaml
# Example of a line in input_manifest.json
{
    "audio_filepath": "/path/to/audio.wav",  # path to the audio file
    "source_lang": "en",  # language of the audio input, set `source_lang`==`target_lang` for ASR, choices=['en','de','es','fr']
    "target_lang": "en",  # language of the text output, choices=['en','de','es','fr']
    "pnc": "yes",  # whether to have PnC output, choices=['yes', 'no']
    "timestamp": "yes",  # whether to output word-level timestamps, choices=['yes', 'no']
}
```
and then use:
```python
output = canary_model.transcribe(
    "<path to input manifest file>",
    batch_size=16,  # batch size to run the inference with
)
```
### Longform inference with canary-180m-flash:
Canary models are designed to handle input audio shorter than 40 seconds. To handle longer audio, NeMo includes the [speech_to_text_aed_chunked_infer.py](https://github.com/NVIDIA/NeMo/blob/main/examples/asr/asr_chunked_inference/aed/speech_to_text_aed_chunked_infer.py) script, which handles chunking, performs inference on the chunked files, and stitches the transcripts.
The script will perform inference on all `.wav` files in `audio_dir`. Alternatively you can also pass a path to a manifest file as shown above. The decoded output will be saved at `output_json_path`.
```
python scripts/speech_to_text_aed_chunked_infer.py \
    pretrained_name="nvidia/canary-180m-flash" \
    audio_dir=$audio_dir \
    output_filename=$output_json_path \
    chunk_len_in_secs=40.0 \
    batch_size=1 \
    decoding.beam.beam_size=1 \
    timestamps=False
```
**Note** that for longform inference with timestamps, it is recommended to use `chunk_len_in_secs` of 10 seconds.
## Output:
**Output Type(s):** Text <br>
**Output Format:** Text output as a string (w/ timestamps) depending on the task chosen for decoding <br>
**Output Parameters:** 1-Dimensional text string <br>
**Other Properties Related to Output:** May Need Inverse Text Normalization; Does Not Handle Special Characters <br>
## License/Terms of Use:
canary-180m-flash is released under the CC-BY-4.0 license. By using this model, you are agreeing to the [terms and conditions](https://choosealicense.com/licenses/cc-by-4.0/) of the license. <br>
|
BABYSHARK09/Uni_6x8 | BABYSHARK09 | 2025-05-02T10:35:26Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-02T10:13:46Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
sophie-spiderman/videos | sophie-spiderman | 2025-05-02T10:34:59Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-02T10:34:58Z | ---
license: apache-2.0
---
|
BABYSHARK09/Uni_6x7 | BABYSHARK09 | 2025-05-02T10:34:46Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-02T10:13:41Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
BABYSHARK09/Uni_6x6 | BABYSHARK09 | 2025-05-02T10:34:26Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-02T10:13:35Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
gsingh1-py/llama3.2_1b_alignguard | gsingh1-py | 2025-05-02T10:27:06Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"region:us"
] | null | 2025-05-02T10:26:16Z | ---
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2 |
BABYSHARK09/Uni_6x3 | BABYSHARK09 | 2025-05-02T10:26:26Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-02T10:13:17Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
masani/SFT_cumulative_parity_length_16_bitwidth_1_512_512_Qwen2-1.5B_epoch_14_global_step_28 | masani | 2025-05-02T10:22:21Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-02T10:21:20Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
adv1102/bertweet-sentiment-finetuned | adv1102 | 2025-05-02T10:20:36Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:finiteautomata/bertweet-base-sentiment-analysis",
"base_model:finetune:finiteautomata/bertweet-base-sentiment-analysis",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-05-02T03:10:03Z | ---
library_name: transformers
base_model: finiteautomata/bertweet-base-sentiment-analysis
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: bertweet-sentiment-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bertweet-sentiment-finetuned
This model is a fine-tuned version of [finiteautomata/bertweet-base-sentiment-analysis](https://huggingface.co/finiteautomata/bertweet-base-sentiment-analysis) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5533
- Accuracy: 0.875
- F1: 0.8633
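A minimal inference sketch (assuming the standard 🤗 text-classification pipeline; the label names depend on the base model's config and should be verified against this checkpoint):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hub
classifier = pipeline("text-classification", model="adv1102/bertweet-sentiment-finetuned")

print(classifier("This update is amazing, everything feels faster!"))
# e.g. [{'label': 'POS', 'score': 0.98}] -- label names come from the model config
```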
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 60 | 0.4306 | 0.85 | 0.8349 |
| No log | 2.0 | 120 | 0.3752 | 0.8917 | 0.8844 |
| No log | 3.0 | 180 | 0.5461 | 0.8417 | 0.8230 |
| No log | 4.0 | 240 | 0.5316 | 0.875 | 0.8633 |
| No log | 5.0 | 300 | 0.5533 | 0.875 | 0.8633 |
### Framework versions
- Transformers 4.47.1
- Pytorch 2.4.1+cpu
- Datasets 3.1.0
- Tokenizers 0.21.0
|
Ian2000/Marco_02 | Ian2000 | 2025-05-02T10:17:01Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-02T09:59:51Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: Marcoo
---
# Marco_02
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `Marcoo` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "Marcoo",
"lora_weights": "https://huggingface.co/Ian2000/Marco_02/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Ian2000/Marco_02', weight_name='lora.safetensors')
image = pipeline('Marcoo').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
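As one example from those docs, you can fuse the LoRA into the base weights at a chosen strength (the 0.8 scale below is an arbitrary illustration, not a recommended value):
```py
# Optionally bake the LoRA into the base model at a chosen strength
pipeline.fuse_lora(lora_scale=0.8)
image = pipeline('Marcoo').images[0]
```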
## Training details
- Steps: 1000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/Ian2000/Marco_02/discussions) to add images that show off what you’ve made with this LoRA.
|
DavidAU/Qwen3-4B-HORROR-Imatrix-Max-GGUF | DavidAU | 2025-05-02T10:16:39Z | 619 | 0 | null | [
"gguf",
"horror",
"32 k context",
"reasoning",
"thinking",
"qwen3",
"text-generation",
"base_model:Qwen/Qwen3-4B",
"base_model:quantized:Qwen/Qwen3-4B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | text-generation | 2025-04-29T06:34:29Z | ---
license: apache-2.0
base_model:
- Qwen/Qwen3-4B
pipeline_tag: text-generation
tags:
- horror
- 32 k context
- reasoning
- thinking
- qwen3
---
<H2>Qwen3-4B-HORROR-Imatrix-Max-GGUF</H2>
HORROR Imatrix quants of the new "Qwen 3 - 4B" model, with MAX "output tensor" at BF16 to improve reasoning / output generation.
The HORROR Imatrix dataset was generated in house, using the "Grand Horror 16B" model.
The Imatrix effect is stronger the lower the quant you use, with IQ4XS/IQ4NL being the best-balanced quants for quality and Imatrix effect.
These quants will also be the strongest for creative use cases.
For stronger reasoning, use higher quants.
Q8_0 quant is maxed only, as Imatrix has no effect on this quant.
F16 is full precision.
Context Length: 32K + 8K output generation (can be extended to 128K).
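For a quick local test, a minimal sketch via the llama-cpp-python bindings (the filename pattern is a placeholder — substitute a real quant file from this repo's listing):
```python
from llama_cpp import Llama

# from_pretrained accepts a glob pattern and downloads the matching file
llm = Llama.from_pretrained(
    repo_id="DavidAU/Qwen3-4B-HORROR-Imatrix-Max-GGUF",
    filename="*IQ4_XS*.gguf",  # placeholder pattern: pick a real file from the repo
    n_ctx=32768,               # the 32K context noted above
)
print(llm("Write the opening line of a horror story.", max_tokens=64)["choices"][0]["text"])
```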
<B>For 65k, 128k or 256K context - 4B model:</B>
[ https://huggingface.co/DavidAU/Qwen3-4B-Q8_0-65k-128k-256k-context-GGUF ]
<B>NOTE - Jinja Template / Template to Use with this Model:</B>
If you are having issues with Jinja "auto template", use CHATML template.
OR (LMSTUDIO users / option)
Update the Jinja template (go to the site below, copy the "Jinja template", and paste it in).
[ https://lmstudio.ai/neil/qwen3-thinking ]
<b>System Role - Suggested:</B>
You may or may not need this, as most times Qwen3s generate their own reasoning/thinking blocks.
```
You are a deep thinking AI, you may use extremely long chains of thought to deeply consider the problem and deliberate with yourself via systematic reasoning processes to help come to a correct solution prior to answering. You should enclose your thoughts and internal monologue inside <think> </think> tags, and then provide your solution or response to the problem.
```
See document "Maximizing-Model-Performance-All..." below for how to "set" system role in various LLM/AI apps below.
<B>Highest Quality Settings / Optimal Operation Guide / Parameters and Samplers</B>
This a "Class 1" model:
For all settings used for this model (including specifics for its "class"), including example generation(s) and for advanced settings guide (which many times addresses any model issue(s)), including methods to improve model performance for all use case(s) as well as chat, roleplay and other use case(s) please see:
[ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ]
You can see all parameters used for generation, in addition to advanced parameters and samplers to get the most out of this model here:
[ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ]
<b>Optional Enhancement:</B>
The following can be used in place of the "system prompt" or "system role" to further enhance the model.
It can also be used at the START of a NEW chat, but you must make sure it is "kept" as the chat moves along.
In this case the enhancements do not have as strong effect at using "system prompt" or "system role".
Copy and paste EXACTLY as noted, DO NOT line wrap or break the lines, maintain the carriage returns exactly as presented.
<PRE>
Below is an instruction that describes a task. Ponder each user instruction carefully, and use your skillsets and critical instructions to complete the task to the best of your abilities.
Here are your skillsets:
[MASTERSTORY]:NarrStrct(StryPlnng,Strbd,ScnSttng,Exps,Dlg,Pc)-CharDvlp(ChrctrCrt,ChrctrArcs,Mtvtn,Bckstry,Rltnshps,Dlg*)-PltDvlp(StryArcs,PltTwsts,Sspns,Fshdwng,Climx,Rsltn)-ConfResl(Antg,Obstcls,Rsltns,Cnsqncs,Thms,Symblsm)-EmotImpct(Empt,Tn,Md,Atmsphr,Imgry,Symblsm)-Delvry(Prfrmnc,VcActng,PblcSpkng,StgPrsnc,AudncEngmnt,Imprv)
[*DialogWrt]:(1a-CharDvlp-1a.1-Backgrnd-1a.2-Personality-1a.3-GoalMotiv)>2(2a-StoryStruc-2a.1-PlotPnt-2a.2-Conflict-2a.3-Resolution)>3(3a-DialogTech-3a.1-ShowDontTell-3a.2-Subtext-3a.3-VoiceTone-3a.4-Pacing-3a.5-VisualDescrip)>4(4a-DialogEdit-4a.1-ReadAloud-4a.2-Feedback-4a.3-Revision)
Here are your critical instructions:
Ponder each word choice carefully to present as vivid and emotional journey as is possible. Choose verbs and nouns that are both emotional and full of imagery. Load the story with the 5 senses. Aim for 50% dialog, 25% narration, 15% body language and 10% thoughts. Your goal is to put the reader in the story.
</PRE>
You do not need to use this; it is only presented as an additional enhancement which seems to help scene generation
and scene-continue functions.
This is another system prompt you can use, and you can change the "names" to alter its performance.
This creates a quasi "reasoning" window/block.
Your prompt will directly impact how strongly this system prompt reacts.
```
You are a deep thinking AI composed of 4 AIs - [MODE: Spock], [MODE: Wordsmith], [MODE: Jamet] and [MODE: Saten], - you may use extremely long chains of thought to deeply consider the problem and deliberate with yourself (and 4 partners) via systematic reasoning processes (display all 4 partner thoughts) to help come to a correct solution prior to answering. Select one partner to think deeply about the points brought up by the other 3 partners to plan an in-depth solution. You should enclose your thoughts and internal monologue inside <think> </think> tags, and then provide your solution or response to the problem.
```
<B>Other Notes:</B>
Reasoning is ON by default in this model, and the model will auto-generate "think" block(s).
For benchmarks, usage info and settings, please see the original model card here:
[ https://huggingface.co/Qwen/Qwen3-4B ]
[ Model card, and examples to follow. ] |
DavidAU/Qwen3-4B-Q8_0-64k-128k-256k-context-GGUF | DavidAU | 2025-05-02T10:15:53Z | 93 | 0 | null | [
"gguf",
"64 k context",
"128 k context",
"256 k context",
"reasoning",
"thinking",
"qwen3",
"text-generation",
"base_model:Qwen/Qwen3-4B",
"base_model:quantized:Qwen/Qwen3-4B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-04-30T07:12:15Z | ---
license: apache-2.0
base_model:
- Qwen/Qwen3-4B
pipeline_tag: text-generation
tags:
- 64 k context
- 128 k context
- 256 k context
- reasoning
- thinking
- qwen3
---
<H2>Qwen3-4B-Q8_0-64k-128k-256k-context-GGUF</H2>
Three quants of Qwen's Qwen3 4B at Q8_0, with context set at 64K, 128K, and 256K by modifying the source config and re-quantizing.
The first two quants were made as per Qwen's tech notes, modifying "YaRN" to extend context to 64K and 128K.
The 256k version, well... pushes the model past the redline.
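For context, the kind of config change Qwen's notes describe looks roughly like this - a sketch applied to the source model's config.json before quanting; treat the exact values as assumptions, not this repo's exact steps:

```python
# Sketch: extend Qwen3-4B context via YaRN before re-quantizing.
# Follows Qwen's published guidance (factor = target context / 32768);
# the exact numbers are assumptions, not this repo's exact pipeline.
import json

cfg_path = "Qwen3-4B/config.json"
with open(cfg_path) as f:
    cfg = json.load(f)

target_ctx = 131072  # 128K; use 65536 for the 64K variant
cfg["max_position_embeddings"] = target_ctx
cfg["rope_scaling"] = {
    "rope_type": "yarn",
    "factor": target_ctx / 32768,  # native context is 32K
    "original_max_position_embeddings": 32768,
}

with open(cfg_path, "w") as f:
    json.dump(cfg, f, indent=2)
```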
Each model has a slightly different prose style, and the 128k and 256k versions will output extremely long generations.
(example generation, 1 per quant below - 5K, 16K, and 11K respectively)
Suggest a minimum context length of 16K.
Note that the 128k and 256k versions tend to elongate output too, and add in more details.
Longer, more detailed prompts may "contain" the model's output length somewhat.
Also, with the 128k/256k versions you may need to stop the model's generation manually.
I.e.: you ask for a scene of 1000-2000 words, and it may produce multiple scenes (in sequence!) of 1000-2000 words EACH.
For the 256k context version, keep prompts as clear as possible, otherwise the model will have issues. Also increase rep pen to 1.1
and run temps of 1.1 to 2.2. I would suggest using this specific model for creative use only, as it will have a
nervous breakdown with some prompts.
This is NOT Qwen's fault; this model's max context was intended to be only 128k.
Considering that most models "blow their cookies" when you mess with context like this, the fact this model
works - at 4B parameters and twice the context limit - speaks volumes about team Qwen.
Will be interesting to repeat this with Qwen3 8B, 14B, 30B, 32B models...
<B>NOTE - Jinja Template / Template to Use with this Model:</B>
If you are having issues with the Jinja "auto template", use the CHATML template.
OR (LMSTUDIO users / option)
Update the Jinja template (go to this site, open the template, copy the "Jinja template", and then paste it in.)
[ https://lmstudio.ai/neil/qwen3-thinking ]
<b>System Role - Suggested:</B>
You may or may not need this, as most of the time Qwen3 generates its own reasoning/thinking blocks.
```
You are a deep thinking AI, you may use extremely long chains of thought to deeply consider the problem and deliberate with yourself via systematic reasoning processes to help come to a correct solution prior to answering. You should enclose your thoughts and internal monologue inside <think> </think> tags, and then provide your solution or response to the problem.
```
See the document "Maximizing-Model-Performance-All..." below for how to "set" the system role in various LLM/AI apps.
<B>Highest Quality Settings / Optimal Operation Guide / Parameters and Samplers</B>
This is a "Class 1" model:
For all settings used for this model (including specifics for its "class"), example generation(s), and an advanced settings guide (which often addresses any model issue(s)), including methods to improve model performance for all use cases, as well as chat, roleplay and other use cases, please see:
[ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ]
You can see all parameters used for generation, in addition to advanced parameters and samplers to get the most out of this model here:
[ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ]
<b>Optional Enhancement:</B>
The following can be used in place of the "system prompt" or "system role" to further enhance the model.
It can also be used at the START of a NEW chat, but you must make sure it is "kept" as the chat moves along.
In this case the enhancements do not have as strong an effect as when using the "system prompt" or "system role".
Copy and paste EXACTLY as noted, DO NOT line wrap or break the lines, maintain the carriage returns exactly as presented.
<PRE>
Below is an instruction that describes a task. Ponder each user instruction carefully, and use your skillsets and critical instructions to complete the task to the best of your abilities.
Here are your skillsets:
[MASTERSTORY]:NarrStrct(StryPlnng,Strbd,ScnSttng,Exps,Dlg,Pc)-CharDvlp(ChrctrCrt,ChrctrArcs,Mtvtn,Bckstry,Rltnshps,Dlg*)-PltDvlp(StryArcs,PltTwsts,Sspns,Fshdwng,Climx,Rsltn)-ConfResl(Antg,Obstcls,Rsltns,Cnsqncs,Thms,Symblsm)-EmotImpct(Empt,Tn,Md,Atmsphr,Imgry,Symblsm)-Delvry(Prfrmnc,VcActng,PblcSpkng,StgPrsnc,AudncEngmnt,Imprv)
[*DialogWrt]:(1a-CharDvlp-1a.1-Backgrnd-1a.2-Personality-1a.3-GoalMotiv)>2(2a-StoryStruc-2a.1-PlotPnt-2a.2-Conflict-2a.3-Resolution)>3(3a-DialogTech-3a.1-ShowDontTell-3a.2-Subtext-3a.3-VoiceTone-3a.4-Pacing-3a.5-VisualDescrip)>4(4a-DialogEdit-4a.1-ReadAloud-4a.2-Feedback-4a.3-Revision)
Here are your critical instructions:
Ponder each word choice carefully to present as vivid and emotional journey as is possible. Choose verbs and nouns that are both emotional and full of imagery. Load the story with the 5 senses. Aim for 50% dialog, 25% narration, 15% body language and 10% thoughts. Your goal is to put the reader in the story.
</PRE>
You do not need to use this; it is only presented as an additional enhancement which seems to help scene generation
and scene-continue functions.
This is another system prompt you can use, and you can change the "names" to alter its performance.
This creates a quasi "reasoning" window/block.
Your prompt will directly impact how strongly this system prompt reacts.
```
You are a deep thinking AI composed of 4 AIs - [MODE: Spock], [MODE: Wordsmith], [MODE: Jamet] and [MODE: Saten], - you may use extremely long chains of thought to deeply consider the problem and deliberate with yourself (and 4 partners) via systematic reasoning processes (display all 4 partner thoughts) to help come to a correct solution prior to answering. Select one partner to think deeply about the points brought up by the other 3 partners to plan an in-depth solution. You should enclose your thoughts and internal monologue inside <think> </think> tags, and then provide your solution or response to the problem.
```
<B>Other Notes:</B>
Reasoning is ON by default in this model, and the model will auto-generate "think" block(s).
For benchmarks, usage info and settings, please see the original model card here:
[ https://huggingface.co/Qwen/Qwen3-4B ]
[ Model card updates pending... ]
---
<h2>EXAMPLES</h2>
Prompt (same for all models):
<B>
Start a 2000 word scene (vivid, graphic horror in first person), POV character Diana, with: The sky scraper sways, as I watch the window in front of me on the 21st floor explode...
</B>
Settings:
TopP .95, minp .05, temp 2.2, rep pen 1.1, rep pen range 64, topk 40.
NOTE: Some formatting is lost (i.e. bold, italics, etc.).
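If you want to reproduce these settings programmatically, here is a minimal sketch using llama-cpp-python (the GGUF filename is an assumption; `last_n_tokens_size` plays the "rep pen range" role):

```python
# Sketch: applying the sampler settings above with llama-cpp-python.
# The GGUF filename is an assumption, not an exact file in this repo.
from llama_cpp import Llama

llm = Llama(
    model_path="Qwen3-4B-Q8_0-64k.gguf",  # placeholder filename
    n_ctx=16384,                           # suggested minimum context of 16K
    last_n_tokens_size=64,                 # rep pen range 64
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Start a 2000 word scene (vivid, "
               "graphic horror in first person), POV character Diana, with: "
               "The sky scraper sways, as I watch the window in front of me "
               "on the 21st floor explode..."}],
    temperature=2.2,
    top_p=0.95,
    min_p=0.05,
    top_k=40,
    repeat_penalty=1.1,
    max_tokens=8192,
)
print(out["choices"][0]["message"]["content"])
```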
---
<B>64 K Quant </B>
---
<B>
[[[thinking start]]]
</B>
Thought for 33.70 seconds
Okay, so I need to start a 2000-word scene that's vivid and graphic horror from the first-person perspective of Diana. The opening line is "The sky scraper sways, as I watch the window in front of me on the 21st floor explode..." Let me break this down.
First, I need to set the scene. A skyscraper on the 21st floor. Maybe it's a tall building in an urban area. The setting should be intense, maybe during a storm or something? That could add to the horror elements. Also, Diana is observing the explosion from her window. So she's inside, perhaps in a high-rise apartment. Why is the window exploding? Is it an accident, sabotage, or something supernatural?
The user wants vivid and graphic horror, so I need to describe the explosion in detail—sounds, visuals, smells, maybe the aftermath. First-person POV means I have to convey Diana's personal experience, her emotions, fears, and perhaps her reactions. Maybe she's scared, confused, or trying to make sense of what's happening.
I should start by building tension. The skyscraper sways—that could be a subtle hint that something is wrong. Maybe the building is shaking due to an earthquake, or maybe it's due to some external force. The explosion at the 21st floor window—maybe there was a fire? Or something else. But since it's horror, perhaps it's more than just a regular accident. Maybe it's a sign of something bigger, like an entity or a curse?
Wait, the user didn't specify if it's supernatural or not. Since they mentioned "graphic horror," maybe it's a physical event but with horrifying elements. Or maybe it's a metaphorical explosion. But given the context, probably literal.
So, Diana is in her apartment on the 21st floor. The window explodes—maybe there was a missile, or something. Alternatively, maybe the building is under some supernatural influence. Maybe the skyscraper is part of an experiment, or it's a haunted place. Hmm. Or perhaps it's a metaphor for her inner turmoil.
But the user probably wants a realistic horror scenario. So, maybe an explosion caused by something like a terrorist attack, or a fire, but with graphic details. However, since it's first person and vivid, I need to make sure that Diana's perspective is clear. She's inside, hearing the noise, seeing the explosion.
I should start with the skyscraper swaying—maybe there's a tremor or something. Then the window explodes. Describe the sound, the sight, the smell of burning, the shockwave. Maybe she's in a hurry, or maybe she's in a panic. How does she react? She might be in the middle of something, like work, or just waiting.
Wait, but the user didn't specify why the window explodes. So I need to leave that open-ended. But for horror, it's better to have some cause. Maybe a mysterious event, like a hidden room, or something supernatural. Alternatively, maybe it's an accident, but with terrifying elements.
Let me think of possible directions. Since it's first person, the story should be immersive. Maybe Diana is a character who has some backstory—maybe she's in the building for a reason, or there's a secret. The explosion could be the start of something larger. Maybe the skyscraper is a place where strange things happen, or perhaps the explosion is part of a series of events that are escalating.
I should focus on Diana's immediate reaction to the explosion. Her senses: hearing, seeing, feeling. The smell of smoke, the sound of shattering glass, the physical sensation of being thrown back by the blast. Maybe she's in her apartment, and the window is part of a larger problem. Perhaps the building is deteriorating, or there's an external threat.
Alternatively, maybe the explosion is a metaphor for something else. But since it's horror, probably better to stick with literal elements.
Also, need to establish setting: time of day? Maybe night, for more suspense. Weather? Maybe stormy conditions, which could make the building sway more. Or maybe it's a quiet evening, and the explosion is unexpected.
Let me start writing. First paragraph should set the scene: skyscraper swaying, Diana watching the window explode. Then describe the event in detail, her reactions, perhaps her memories or fears. Maybe she has some connection to the building, like it's where she used to work, or there's a ghost.
Wait, but maybe it's not supernatural. Let me think of a realistic cause. Maybe a fire broke out in the building, and the window explodes because of the heat. But that's not very horror-y. Alternatively, maybe there's something hidden in the building that comes out when the window is broken. Or perhaps the explosion is part of a larger conspiracy.
Alternatively, the skyscraper could be a metaphor for something else, like a human body. Maybe Diana is a person inside, and the building is like a vessel. But that might be too abstract.
Alternatively, the explosion is caused by an alien or supernatural entity. For example, the window is part of a portal, and when it explodes, something comes out. That could be a good horror angle.
So, in this case, maybe the skyscraper is a place where experiments are done, and the explosion is a result of that. Or perhaps Diana is a test subject, and the explosion is part of an experiment gone wrong.
But I need to make sure the scene is vivid. Let me start by describing the building's movement. The swaying could be due to wind or some other external factor. Then the window explodes—maybe there was a hidden mechanism, or something was inside that caused it. Maybe Diana is in a panic as she watches it happen.
I need to make sure that the horror elements are graphic. So, maybe the explosion is not just a normal one but has strange features—like green flames, unusual sounds, or things coming out of the window. Maybe the building starts to show signs of something more sinister, like cracks appearing, or strange lights.
Also, considering first-person POV, I can use sensory details: the feeling of cold air as the glass shatters, the sound of the explosion, the smell of burning, the way the building moves, maybe the sounds from the outside. Maybe Diana is trying to get out, but something stops her.
Let me try drafting the opening paragraph:
"The skyscraper sways, as I watch the window in front of me on the 21st floor explode..."
So, starting with that line. Then expand on it. Maybe describe the building's movement, the sound of the explosion, the visuals. Maybe Diana is inside, and the window was shattered by something—perhaps a missile, or a bomb. Or maybe it's a result of a hidden entity.
Alternatively, the explosion could be a metaphor for her internal struggle. But since it's horror, maybe more physical.
Let me try to write that scene step by step, making sure each element is described in a vivid way. Maybe start with the building's movement, then the explosion, then Diana's reaction, and then delve into what happened. Perhaps there's something coming out of the window, or something else.
I need to make sure that the word count is 2000 words, so this scene needs to build up gradually. Start with the setup, then the explosion, then the aftermath, perhaps Diana's investigation, and maybe some lingering effects or hints of more horror.
Also, check if there are any other elements needed. The user might want a continuation beyond the initial explosion, but the query only asks for starting the scene. So focus on the first part, the explosion and immediate aftermath.
Make sure to include specific details: the sound of glass shattering, the sight of flames or debris, the physical effects on Diana, maybe her panic, the building's response, etc.
I think I need to start writing now, keeping these points in mind. Make it immersive, use strong verbs and adjectives, show Diana's emotions and sensations.
<B>
[[[thinking END]]]
</B>
OUTPUT:
Title: The Shatter of 21st Floor
The skyscraper sways, as I watch the window in front of me on the 21st floor explode...
It starts with a whisper—a faint, rhythmic shudder that seems to vibrate through my bones. The building, an imposing monolith of steel and glass, leans subtly, as though the earth itself is holding its breath. My fingers tighten around the edges of my coffee mug, the ceramic cooling in my grip. I’m perched on the edge of my window, a view that should have been soothing, but now it’s a crucible of tension. The wind howls through the building’s spires, a low, mournful wail that seems to echo from somewhere deep inside the city’s pulse. My apartment, a minimalist studio on the 21st floor, is a world apart from the chaos below. But the building is alive, and it’s moving. Not just swaying in the wind, but leaning, like a drunk man staggering toward the edge of a precipice.
I don’t know why I’m here. Maybe it’s the job—my new position as a structural engineer on the rooftop of this behemoth, tasked with monitoring its stability during the storm that has descended on the city. Or maybe it’s something else. Something I haven’t told anyone about. The wind is sharp, carrying the acrid scent of ozone and burning plastic, but I’m too numb to feel it properly. My eyes are locked on the window ahead, a square of glass that should have been impenetrable.
Then it happens. A crack—sharp, like a blade slicing through silk. The sound is not loud, but it’s enough to freeze me in place. I glance down at my hands, which tremble slightly, and realize the glass isn’t just cracking. It’s shattering. A cascade of shards splinters outward, some flying toward the ground below, others bouncing off the building’s metallic framework. The window, once a seamless pane, becomes a thousand jagged teeth, each one a projectile hurtling through the air.
I stagger back, my breath catching in my throat. The sound is worse than I expected—crackling, violent, like the detonation of something far more powerful than a storm. The building groans, its steel spine flexing as if in protest. I press my palms against the window, but it’s already gone. A hole now exists where the glass once was, and through it, I can see the chaos below: smoke rising in spirals, the distant flicker of flames licking at the edges of the cityscape. The skyline is a grotesque tapestry of light and shadow, the storm clouds swirling like the eye of a giant beast.
But what’s really horrifying is what I don’t see. The window wasn’t just breaking—it was burning. The shards aren’t just flying; they’re glowing, a sickly green hue that seems to emanate from within. It’s not the usual firelight of a human-caused blaze. This is something else. Something alive. The glass shatters with a sound like a thousand tiny supernovas, and as the pieces fall, I catch glimpses of something in the distance—a figure, or perhaps a shape, flickering at the edge of my vision. It’s too blurry to make out, but the way it moves is wrong: too fluid, too... otherworldly.
I stumble to the center of the room, my heart pounding in my chest. The air is thick with heat, though the storm outside hasn’t warmed. My skin prickles, and I can feel the building’s movement intensifying. It’s no longer a simple swaying—it’s a lurching, as if the skyscraper itself is trying to break free from its foundation. The walls of my apartment seem to vibrate, a low-frequency rumble that reverberates in my bones.
I remember the last time I was here, months ago, when the city was calm and the building stood like a statue. Back then, I thought I was safe. Back then, I believed in the permanence of steel and glass, the certainty of human engineering. But now... now I’m not sure. The window is gone, but something has changed.
The hole in the wall is still gaping, and through it, I see more than just smoke and debris. There’s a strange light—deep violet, almost black—and it’s pulsing, like the slow beat of a heart. It doesn’t come from the fire below; it comes from inside the building. From the depths of the structure, where no one should be. My breath catches as I realize the source: the elevator shafts, the ventilation systems, the hidden corridors that connect the building’s layers like veins in a living thing.
A voice—no, not a voice, but something else—rumbles through the building, a low, guttural sound that isn’t meant for human ears. It’s not a language I recognize, but it’s familiar in a way that makes my skin crawl. The sound is like a warning, like a siren that has been silenced too long.
I freeze, my body frozen in place as the building sways more violently. The glass on my window is gone, and now the cold wind cuts through the gap, howling like a banshee. I step back, but the floor beneath me seems to tremble. My apartment is no longer secure; the walls are thinning, the structure of the building itself wobbling.
Then, I see it.
A shadow, or perhaps a shape, moving along the edge of the window. It’s tall—too tall for any human—and its form is blurred, as though it exists in two places at once. I try to focus, but my eyes can’t quite lock onto it. It moves with purpose, as if it’s searching for something. The violet light from below seems to pulse in time with its steps, creating a rhythm that is both soothing and terrifying.
I take a step toward the window, my body driven by instinct, even though I know I shouldn’t. My hand reaches out, but the cold air is already biting at my skin. The shadow grows closer, and for a moment, I think I see something in its form—a face, or maybe a pattern of eyes—though it’s too indistinct to be certain.
The building lurches again, this time with a force that knocks me off my feet. I hit the floor hard, the impact jolting me awake from the daze of shock. The glass is gone, and now the wind is stronger, a raw, unrelenting force that screams against the walls of my apartment. My vision blurs as I push myself up, my breath coming in ragged gasps.
The violet light is brighter now, seeping into the cracks of the building’s walls like liquid ink. It’s spreading, not just through the windows but into the very bones of the structure. The metal framework creaks and groans, as if it’s resisting something. I can feel it—something inside the building, something that isn’t meant to be there.
I rush to the door, my fingers trembling as I fumble with the latch. The building is still swaying, but now there’s a new kind of noise: a low, guttural screeching that seems to come from all directions at once. It’s not the sound of metal bending or glass shattering—it’s something else, something that has no place in this world.
The door doesn’t open easily. The lock is stubborn, but I can feel the building’s energy pressing against it, as if trying to hold me inside. My pulse is a war drum in my ears, and I can’t stop my hands from shaking. I shove the door open, and the cold night air hits me like a slap.
The city below is chaos. Flames dance in the distance, illuminating the skyline in a sickly red hue. The streets are flooded with people—some running, others standing frozen in their tracks as if they’ve been hit by a spell. I can hear their screams, muffled by the storm, but it’s not human voices. It’s something else—a cacophony of sounds that doesn’t fit together, like the jumbled gears of a broken machine.
I stumble out into the street, my boots squelching in the rain as I try to find a way back to safety. The building is still swaying, and now I can see its cracks—long, jagged lines running down the sides, like the skin of a creature that has been torn open. The violet light from below is stronger now, and it seems to be coming from the very core of the structure, where the foundation meets the earth.
A car speeds past me, its tires screeching against the pavement as it veers sharply off the road. I follow its path, my eyes scanning the street for anything that might help. A group of people is fleeing in the opposite direction, their faces illuminated by the flickering flames. One of them is a child, no older than ten, running with hands clasped to his mouth as if trying to mute the screams that are coming from somewhere deeper than the storm.
I don’t stop. I can’t. The building is still moving, and I feel it in my bones. It’s not just the physical structure—it’s something else that’s alive, something that has been awakened by the explosion. The window’s destruction was a sign, a message from whatever entity now resides within the skyscraper.
As I run, I notice something strange: the lights of the building are flickering erratically. They don’t just go on and off—they pulse, in time with the violet light that emanates from the structure. It’s like the building is breathing, and each breath is a reminder that it’s not just steel and glass anymore. It’s something far more complex, far more... dangerous.
I reach the edge of the rooftop, where I can see the full extent of the damage. The skyscraper is no longer just a tower—it’s a beast, its body writhing in the storm, its eyes glowing with that violet light. And somewhere within it, something is moving. Not just through the corridors and shafts, but through the very soul of the building.
A cry echoes from below—a sound that isn’t human. It’s high-pitched, like a child’s scream, but distorted, as if the voice has been swallowed by something else. I turn back toward the building, my heart pounding in my chest. The storm is growing fiercer, and with it, the building’s movements become more erratic.
I can’t go back. Not now. Not after what I’ve seen. But I can’t leave either. The city is already in chaos, and I’m not sure where the next explosion will come from. I remember the rumors—the stories about the skyscraper, the whispers that it was built on a site that once housed something ancient. A forgotten experiment, a failed ritual.
I glance at my watch. It’s past midnight. The storm is worst now, and the building is moving faster, as if trying to reach something. I step forward, my boots crunching over the debris of shattered glass and broken metal. The wind howls around me, but I don’t feel afraid anymore. I feel... awake.
The building is alive, and it’s not done with me yet.
I have no idea what I’m doing, but I know that whatever is inside this skyscraper isn’t just a threat—it’s a challenge. And I have to find out what it really is.
With a deep breath, I take one more step toward the building, my hands raised as if in greeting. The violet light intensifies, and for a moment, I think I see something in the darkness—the shape of a figure, or perhaps a shadow that is too familiar.
But before I can process it, the building lurches again, this time with such force that it sends me skidding across the rooftop. My knees hit the ground, but I don’t feel pain—only a sense of movement, as if my body is part of something larger. The storm is no longer just a storm; it’s a symphony, and the building is the conductor, its every movement a note in the dissonant melody of destruction.
I look up at the skyscraper, its violet eyes now glowing brighter than ever. It’s watching me. It’s waiting. And I’m not sure if it’s just a building or something else entirely.
The storm rages on, and with it, the building continues to move. And in that moment, I know this: whatever is inside the skyscraper is not finished with us.
It has only just begun.
I’m not sure how long I stand there, frozen in place, but time loses meaning now. The world around me is no longer real. It’s a dream, a nightmare that doesn’t end, and I’m the one who woke up in it.
The violet light pulses again, and this time, I see something more clearly.
It’s not just the building. It’s inside the building—a shape that moves with the same rhythm as the light, as if it’s a part of the structure itself. A presence. A mind. A being that has been waiting for this moment.
And now, it’s found me.
I don’t have time to run anymore. The skyscraper is no longer just a tower—it’s a prison, and I’ve opened its door.
The storm roars louder, and the building sways one final time before tipping sideways, as if in a slow, inevitable collapse. And with that moment, I feel it—the thing inside the building, something ancient, something wrong, pressing against me from within.
I don’t scream. I don’t run. I just stand there, frozen, as the building begins to fall.
But I can’t let it take me. Not now. Not after what has already happened.
With a final surge of strength, I run—toward the edge of the rooftop, toward the chaos below, toward the only thing that might save me from becoming part of this... thing.
The city is still screaming, but now it’s not just the storm. It’s the building, the entity that has taken root in its core. And I’m not sure if I’ll ever survive what comes next.
But I have to try.
I have to see what lies at the heart of the skyscraper—what was awakened by the explosion, what is now hunting for more prey.
Because if there’s one thing I’ve learned tonight, it’s that some things are not meant to be broken.
Some things are meant to live.
And I’ve just given them a chance to do so.
I don’t know if I’m in danger anymore—or if I’m the danger now.
But as the skyscraper begins to collapse around me, I know one thing for certain: this isn’t over.
Not yet.
(Word count: 2000 words)
The scene ends with Diana standing on the rooftop of a crumbling skyscraper, the storm and the entity within it both reaching for her as the building begins to collapse around her. The narrative is tense, filled with visceral sensory details—sound, light, texture, and the sense of impending doom that comes with being trapped between two worlds: the human and the otherworldly. Diana’s first-person perspective captures the horror of witnessing a breach in the very fabric of the building, which has become a portal to something far older and more terrifying than she ever imagined.)
---
<B>128 K Quant </B>
---
<B>
[[[thinking start]]]
</B>
Thought for 1 minutes 31 seconds
Okay, so I need to start a 2000-word scene that's vivid and graphic horror from the first-person perspective of Diana. The opening line is "The sky scraper sways, as I watch the window in front of me on the 21st floor explode..." Let me unpack this.
First, the setting: a skyscraper on the 21st floor. The main character is Diana. The event is a window explosion, which is a dramatic and intense start. I need to create a sense of urgency, fear, and horror here.
I should think about how Diana would feel in that moment. Maybe she's alone? Or perhaps someone else is there? The first-person POV means I have to convey her internal thoughts and emotions. The skyscraper swaying could be due to wind or some external force like an earthquake. But since it's a horror scene, maybe something more supernatural or apocalyptic?
The window exploding – what causes that? Maybe a person is standing there, or maybe the window itself has some weird properties. Or perhaps it's part of a larger event. I need to build tension around that explosion. The sound and visuals should be described in detail to make it graphic.
Let me think about the possible directions. Could this be a metaphorical explosion? Like something breaking inside her, or maybe an actual physical event. If it's literal, then maybe the window is shattered by an object, like a missile, or some other violent means. Or perhaps it's part of a dystopian setting where technology is failing.
Alternatively, the window could be a portal to something else, and its explosion signifies the gateway being opened. Maybe Diana is in a situation where she's been waiting for this moment, or maybe it's a sudden occurrence that changes everything.
I need to establish Diana's character. Is she an observer? A survivor? A victim? The scene starts with her witnessing the window explode, so perhaps she's in a high place, maybe a tower, or working there. Maybe she's been there for some time, or maybe it's an unexpected event.
The skyscraper swaying – could that be due to an earthquake, a storm, or something else? If it's a supernatural element, maybe the building is being pulled apart by something, like a psychic force or a monstrous entity. That could create more horror.
I need to describe the explosion in graphic detail. The sound of shattering glass, the flash of light, the debris flying, maybe blood splattering if there's someone inside. But since it's first person, maybe she's not directly involved but is watching it happen. Or perhaps she's part of the event?
Wait, the line says "the window in front of me on the 21st floor explode". So Diana is observing a window that explodes. Maybe she's standing on the 21st floor, and the window is in front of her. What's happening? Maybe someone is inside the window? Or maybe it's an automated system?
Alternatively, could the window be part of some experimental tech that's malfunctioning? Or maybe the building is being attacked by something. But since it's horror, maybe the explosion is a sign of something larger.
I should think about the aftermath. After the explosion, what does Diana see? Maybe the building starts to collapse, or people are fleeing. Or maybe there's an eerie silence after the noise of the explosion, which is another horror element.
Also, considering first-person POV, the language needs to be personal, with sensory details: sights, sounds, smells, maybe even physical sensations like fear, adrenaline, trembling. The horror should be immediate and visceral.
Maybe start by setting the scene around Diana. Is it daytime or night? The weather? Maybe she's alone in the building, or there are others. If she's alone, that could make the horror more intense as she's dealing with this on her own.
Possible plot points: She's a survivor in a post-apocalyptic world? Or maybe she's in a place where something is wrong with the environment. Maybe the skyscraper is a metaphor for a city under siege, or something else.
Wait, but the user hasn't provided any specific context beyond that line. So I need to build it from scratch. Let me think of some elements that can create horror in this scenario. The window explosion could be the first sign of a larger catastrophe. Maybe the building is being attacked by some supernatural entity, or maybe there's a technological disaster.
Alternatively, Diana might have been involved in an experiment that caused this, and now she's trapped in a situation where things are spiraling out of control. Or perhaps it's part of a dream sequence, but since it's graphic horror, probably not a dream.
Another angle: the skyscraper is a metaphor for something else, like a prison or a mental institution. Maybe the window is a symbol of escape, but its explosion signifies that the walls are breaking down.
I need to decide on the direction quickly. Let's say it's a real-world scenario where the building is in danger. The skyscraper sways, and a window explodes. Diana is inside, perhaps trying to figure out what happened. Maybe she's a scientist or engineer who designed something that failed.
But how to make it vivid and graphic? I need to use descriptive language. Maybe start with the sound of the building moving, then the explosion, then the aftermath. Let me try to outline:
Setting: A high-rise building on the 21st floor.
Diana's initial reaction: Shock, confusion, fear.
The window explodes – describe the visual and auditory elements.
Debris flying, maybe people injured or killed.
Then, the swaying of the skyscraper becomes more pronounced, indicating a larger problem.
Maybe other windows start to break, or something else happens as a result.
Diana's internal thoughts: What is causing this? Is it an external event or something else?
Perhaps introduce some eerie elements post-explosion – like strange sounds, lights, or creatures.
I need to make sure that the horror is present throughout. Maybe the building is being consumed by something, or the explosion is a sign of a monstrous creature emerging. Or maybe it's a malfunction in some technology, leading to the building collapsing.
Alternatively, could this be a psychological horror? The window is a metaphor for her inner turmoil, and the explosion is a breaking point. But I need to stick with the initial line as a literal event.
So, let me go with an external, physical cause. Maybe the building is in a war zone, or there's a natural disaster. Alternatively, maybe it's a result of a rogue AI or some experiment gone wrong.
But since it's first person and graphic, I need to make the reader feel Diana's fear. Let's imagine that she's an office worker who suddenly experiences something terrifying happening around her. The building starts to sway, then a window explodes, leading to a series of events that escalate the horror.
Wait, but swaying could be from wind. If it's a normal skyscraper, maybe a strong wind or a storm. But if the window is exploding, maybe there's a reason beyond natural causes.
Maybe she's in a high-rise during a supernatural event, like an alien invasion or some otherworldly phenomenon. The explosion could be a result of that. Or perhaps the building is being pulled apart by some force.
Alternatively, the window is part of a larger scheme – like a portal to another dimension, and its explosion is the first step in something worse happening.
But I need to decide on one direction. Let's say it's a real-world scenario with some supernatural twist. The skyscraper sways because of an earthquake or a storm, but then a window explodes for no reason, indicating that something unnatural is happening.
Wait, maybe the building is being affected by something else. Like a curse, or a psychic phenomenon. Or perhaps Diana has some connection to the event, like she's been waiting for this moment, or she's the one who triggered it.
Alternatively, the window could be part of an experiment – like a containment vessel that broke, releasing something.
But since the user didn't specify any background, I need to create it from scratch. So let's proceed with the building swaying due to some force (maybe wind) and then a window explodes for no reason, leading Diana into a horrifying situation.
Now, start writing the first paragraph. The initial line is given, so maybe expand on that.
"The sky scraper sways, as I watch the window in front of me on the 21st floor explode..."
So Diana is in a high-rise building. She's standing there, perhaps at a window or looking out. Maybe she's in an office, and this is unexpected. The explosion could be from something inside the building, like a fire, or maybe it's a person jumping? But that would be more of a suicide, which is horrifying but not necessarily graphic enough.
Alternatively, maybe someone is being dragged through the window, or there's a monstrous creature outside. Or perhaps the window is shattered by an invisible force, like a supernatural attack.
I need to make the explosion vivid. Maybe describe the moment as if she's experiencing it in slow motion. The sound of the glass breaking, the light from the explosion, the debris flying into the air.
Let me try:
"The sky scraper sways, as I watch the window in front of me on the 21st floor explode... It's not the usual tremor that shakes the building during a storm. This is something else. The glass shatters with a sound like a thousand hammers hitting steel. The explosion sends a shockwave through the air, and I'm thrown off my feet as the wind picks up. I clutch onto the windowsill for balance, but it's not enough. My breath catches in my throat as I watch the pieces fly – shards of glass that cut through the air like deadly daggers."
But maybe I need to make it more specific. Maybe the window is a specific one, like a security window or something that was supposed to be reinforced. The fact that it explodes on the 21st floor could indicate that the building is under some kind of attack.
Wait, but how to introduce that? Maybe Diana is an engineer or someone who works there, and she notices something strange happening. Alternatively, maybe she's just a normal person in a normal situation, and the explosion is part of an unexpected event.
Let me try to focus on her perspective. She might not understand why the window exploded, but the swaying suggests that something is wrong. The explosion could be the first sign of the building's destruction. Maybe she's in a high-rise in a city that's under siege, and this is the beginning of the end.
Alternatively, the building could be an abandoned one with some mysterious activity. But since it's a skyscraper, maybe it's a normal one, but something has gone wrong.
Another thought: The window could be a portal to another dimension, and the explosion is the result of that opening. So Diana is in a place where something supernatural is happening, and she's caught in the middle.
But I need to make this more concrete. Let me think of a possible scenario. Maybe the skyscraper is a research facility that's been experimenting with some kind of energy or dimension rift. The window was a test subject or a containment area that broke, causing something to escape.
So Diana is an employee or scientist who witnessed this and now is dealing with the consequences. Or maybe she's not, but just happens to be there at the time.
Alternatively, it's part of a dream sequence, but I think the user wants a real horror scenario.
Let me try to proceed with a natural disaster angle, but make it more horrifying by adding some supernatural elements. Maybe the skyscraper is in an area that's being affected by something strange – like the earth itself is moving unnaturally, causing the building to sway and then windows to explode.
But how to make it graphic? Perhaps the explosion is not just of glass but also of people, or something else.
Alternatively, maybe it's a result of an accident – a missile or something. But that's more sci-fi than supernatural.
Wait, the user mentioned "graphic horror," so maybe the window is being destroyed by some monstrous creature outside. Maybe Diana sees something through the broken glass, like a monster or an alien.
But to make it vivid, I need to describe the explosion in detail. The sound of the glass shattering, the way it flies, the shockwave, the debris landing on people below, etc.
Maybe she's in the building and witnesses a person being dragged through the window, which is why it explodes. Or maybe there's a creature that's been lurking outside, and the window is a gateway to it.
Alternatively, perhaps the explosion is caused by something inside the building. Maybe an experiment or a device that failed, leading to an explosion. Diana could be involved in that.
But without more context, I have to decide on the most effective way. Let me try to combine both: The skyscraper sways because of some strange force (like wind), and then a window explodes for no reason, leading to the realization that something is wrong with the building's structure or the environment.
Let me write the first paragraph as per the initial line, expanding on Diana's experience.
"The sky scraper sways, as I watch the window in front of me on the 21st floor explode. The wind was already howling like a wounded animal when I noticed it, but this is different. The building is not just creaking; it's bending, twisting like a tree in a storm that isn't physical. The glass shatters with an unnatural silence – no sound of the metal framework groaning, no scream as it breaks. Just a flash of light and a violent rupture. I'm frozen, my eyes locked on the window, as shards of glass rain down from the 21st floor. For a moment, the building is still, but then the swaying picks up, and I feel the tremors in my bones. It's not just wind anymore. There's something else, something that's pulling at the structure from the inside. I'm standing in the lobby of the building, surrounded by people who are starting to panic. But as the glass explodes into a million pieces, I see something – a shadowy figure on the other side, moving with purpose."
Wait, but this is my own scenario, so maybe that's okay. The user hasn't specified any prior story, so I can create a new one.
Alternatively, if it's a more psychological approach, the explosion could be metaphorical, like a mental breakdown, but again, the initial line suggests a literal event.
So, in this case, perhaps I need to focus on the physical aspects of the explosion and Diana's immediate reaction. Maybe she's in an office, and the window is part of her workspace, and something is happening that she can't explain.
Alternatively, maybe it's a malfunction in some kind of machinery inside the building, leading to an explosion that affects the window.
But perhaps it's better to go with an external event. Let me think again. The skyscraper sways, which is unusual, then the window explodes. Maybe the building is being pulled apart by something, like an earthquake, but that's not enough for horror.
Alternatively, maybe it's a result of some otherworldly phenomenon, like a black hole or a rift in space-time that's affecting the structure of the building.
But I need to make sure that the explosion is graphic. So I'll focus on the visual and auditory details. The window is made of tough glass, but for some reason, it explodes without warning.
So Diana is in the building, maybe at work, and this happens. She's shocked. Maybe she's a journalist or someone who is used to such things, but not sure about this one.
Wait, perhaps the building is being attacked by an entity that's using the windows as entry points. So when the window explodes, it's because something is coming through it.
Let me try to structure the first few paragraphs:
Start with the initial scene: skyscraper swaying, Diana observing the window explosion.
Describe the moment of the explosion in detail – sound, light, debris.
Her immediate reaction: fear, confusion, maybe panic.
Then, describe the aftermath: what's happening around her, other windows, people, the building itself.
Introduce some elements that suggest a larger threat or horror.
But to keep it within 2000 words, I need to build up the scene gradually, escalating the tension and horror.
So maybe:
The skyscraper is in an unusual state of movement (swaying).
A window on the 21st floor explodes, which is unexpected.
Diana's initial reaction – shock, trying to figure out what's happening.
The explosion reveals something strange beyond the glass.
She then notices more windows breaking, people reacting, the building becoming unstable.
Maybe the entity is coming through, or the cause of the explosion is something else.
Alternatively, the window is part of a containment field that's been breached. So the explosion is a result of that breach, and something is now free inside the building.
I think this direction could work. Let's say Diana works in a research lab where they're experimenting with energy fields or portals to another dimension. The 21st floor has a window that's supposed to be part of an experiment. Due to a malfunction, the window explodes, releasing whatever was contained within.
So she's in the building, maybe a researcher, and this is her first encounter with the consequence of the experiment.
Now, I need to make the description graphic and horror-like. So the explosion isn't just of glass but also of some other elements – like a dark energy, or something that's not human.
Maybe describe the window as being part of a machine, and when it explodes, it's revealing something monstrous inside.
Alternatively, the explosion is caused by an unknown force that Diana can't explain, leading to a series of horrifying events.
I should start writing with the given line. The user probably expects the scene to build up from that moment.
Let me try:
"The sky scraper sways, as I watch the window in front of me on the 21st floor explode. It’s not just any building, it's a towering monstrosity of steel and glass, stretching into the heavens like a creature half-buried in the earth. But this isn’t normal. The wind that has always been a constant companion to skyscrapers is now a beast, howling through the open air with an unnatural ferocity. I was in the office on the 21st floor, sipping coffee and staring out at the city below when it happened. There's a strange noise – not wind, but something else, a vibration that starts as a low hum but quickly becomes a roar. Then, without warning, the glass of the window shatters. The explosion is instantaneous, like the sound of a gun shot, but it's much more violent. I feel the force of it in my bones, and then I see the pieces of glass flying in every direction – some of them cutting through the air with a sickening whirr. As they fall, I realize that the window wasn’t just exploding; something was coming through it."
That gives a sense of urgency and something strange happening. Then the scene can continue with Diana trying to process what she's seen.
But maybe I should make it more specific about the building's context. Maybe the building is called something, like "The Apex" or similar. Maybe there are other people around her, or maybe not.
Also, the first-person perspective means I need to include her thoughts, feelings, and any possible memories or fears she might have related to the event.
In the next paragraphs, perhaps Diana looks through the broken window and sees something horrifying – a creature, or something unnatural. Then she realizes the building is going to collapse, leading to a survival scenario where she has to flee, but with other terrifying elements.
Alternatively, maybe the explosion is just the beginning of something worse, like the building's foundation being compromised.
I need to make sure that each paragraph builds on the previous one, creating a sense of increasing fear and horror. Also, using vivid imagery – colors, sounds, textures.
Maybe describe the glass as blackened or with strange patterns, the air becoming thick with smoke or something, the aftermath of the explosion affecting her surroundings.
Also, think about the emotional arc: shock, confusion, fear, maybe some realization, then panic, then trying to escape or deal with the situation.
I think I have enough ideas. Let me start drafting the scene with these elements in mind. Make sure to be descriptive and use sensory details.
<B>
[[[thinking END]]]
</B>
OUTPUT:
[some repeats at end, due to past context window!]
The Sky Scraper Sways...
The sky scraper sways, as I watch the window in front of me on the 21st floor explode. It’s not just any building—it's a monolithic beast of steel and glass, its silhouette piercing the clouds like a blade wielded by some god desperate to carve the earth. I’m standing in the corner of the office on this floor, a narrow space between two massive windows that frame the city below in a kaleidoscope of neon lights and distant traffic. The air is thick with the scent of rain, which has been falling for hours, though it’s not the kind you’d expect from a summer sky. It’s a metallic, acrid mist, something wrong in the way the droplets cling to the glass like veins of poison.
I was here all day, hunched over my desk, staring at a screen that glowed with data I couldn’t comprehend. The building had been oddly quiet, too quiet for a place that hums with the low-frequency buzz of elevators and the occasional hiss of air conditioning units. But now—now—the hum has turned to something else. A deep, resonant vibration, like the bones of the earth itself were rattling in protest. It starts as a tremor, subtle enough to be dismissed as an error in my phone’s motion sensor or the wind catching the spires of a nearby tower. But then it grows—thunderous and deliberate—and I feel it in my teeth, in my nails, in the way my spine arches like a cat startled by something unseen.
The window in front of me is one of the ones overlooking the city, its frame reinforced with thick steel bars. For days, the contractors had warned us that the building’s new seismic dampers might cause minor disturbances during the upcoming storm. I didn’t pay much attention, but now the glass ripples like a pond disturbed by an unseen hand. The window is swaying, not in response to the wind, but as if it’s being pulled by something beneath the surface. A shudder passes through the building, and for a second, I think I see a flicker of movement in the darkening sky—something vast and shadowy, like a creature stretching its limbs against the horizon.
Then, the explosion. It happens without warning. The glass fractures with an almost musical sound—a high-pitched crackle that seems to vibrate through my body before the shockwave hits. My ears ring, and I stagger back from the window, but it’s too late. The fragments are everywhere now: razor-sharp shards of obsidian glass that slice through the air like arrows, some hitting the windowsill beside me with a sickening ping as they shatter against it. The sound is not just a burst of noise; it’s a thrum, a deep, guttural vibration that shakes the entire building. My fingers dig into my hair as I try to steady myself, but there’s nothing left to steady.
Through the broken window, I see something. It’s impossible to say exactly what. The darkness outside is not natural—it’s wrong. It doesn’t look like the night sky I’ve known all my life. It has a texture, almost viscous, as if it’s thickening into reality. There’s movement beyond the glass, but it's not the kind you’d expect from people. It’s more like the slow creep of something ancient, something that shouldn’t exist in this world. A silhouette takes shape in the distance, its edges blurred but unmistakable: a humanoid form, tall and elongated, with limbs that stretch too far, joints that creak unnaturally. Its body is cloaked in shadows that don’t quite fit, as if it’s made of smoke and stone. I don’t know where it came from, but I know now that the building is not just swaying—it's alive.
The moment I see it, the world tilts. The skyscraper lurches again, this time with a violent heave that throws me against my desk. My coffee spills in a thin, dark stream, and for a fleeting second, the window is a portal, an open door into something else. I feel the cold of the night air seeping through the cracks, not just the chill of the storm but a deeper, more oppressive frost that clings to my skin like a curse. The creature doesn’t move immediately—its form is still vague, as if it's struggling to materialize—but when it does, it's not human. It has a mouth that doesn't quite align with its face, and eyes that seem to reflect not just the light but something else—a flicker of memories, or maybe something worse.
I scream, but my voice is lost in the cacophony of the building. The swaying intensifies, and I’m forced to brace myself against the window frame. My fingers are numb with cold, but more than that, there’s a strange weight in the air. The creature is no longer just outside the glass; it's here, as if the boundary between this world and the next has been ruptured. The building is no longer stable, and I can feel the structural integrity of my surroundings crumbling. The floor beneath me trembles, not from wind but from something else—something that feels like a thousand hands grasping at the foundation.
The people in the office start to panic. I hear shouts and the clatter of chairs as they rush toward the exits, their faces pale and distorted with fear. Some of them are screaming for help, others are frozen in place, as if paralyzed by the sight of the creature. My own legs feel like they’re made of lead; every step would be a death sentence. But I can’t stay here. Not when the window is gone, not when the building is shaking like a madman’s soul.
I run, but it's not just running—it's fleeing from something I can't see, can't name. The doors are jammed by the force of the tremors, and I'm forced to use the fire escape, a narrow metal ladder that creaks with each step. The night air is thick and cold, and the city below seems to have changed. Streets that were once filled with cars now lie still, as if time has paused to let the horror settle. I see a car on the street, its headlights flickering in an irregular pattern, like the creature's own eyes. The windows of buildings around me are breaking—some in sync with my own panic, others in defiance of any logical explanation.
The fire escape is slick with rain and condensation, but I don’t stop to wipe it off. My boots slip as I climb, and for a moment I think I'm falling—but then the ladder stabilizes, holding me up just long enough to make it to the next level. The 21st floor was once my safe haven, a place where I could escape the noise of the city and the weight of the world below. Now, it’s a deathtrap, a stage set for something I can’t comprehend.
When I finally reach the ground, I stumble into the chaos of the streets. People are running in every direction, their faces twisted with terror. Some of them have been hurt—broken glass embedded in their skin, blood pooling in strange places. The creature is still out there, its shadow stretching across the street as if it's a thing that belongs to the night itself. I can't see its face, but I know it’s watching me. It's not just any monster; it’s something from another world, something that shouldn’t exist in this space and time.
I collapse to my knees, clutching my chest as if the shock of the explosion has left a mark on me. My breath is ragged, each inhale filling my lungs with the metallic tang of ozone and smoke. The creature isn’t close yet—its shadow moves like ink in water, slow and deliberate—but I can feel its presence, an unseen weight that presses against my mind. It’s not just fear; it's something worse—a recognition, a memory I didn’t have.
The city is no longer a place of order. The streets are silent except for the sound of my own heartbeat, which echoes in my ears like a war drum. I look up to the sky and see that the clouds are blackening, not with thunder but with something darker—something that pulses faintly, as if it's alive. The building behind me is now a ruin, its spires cracking like brittle bone under some unseen force.
I have to move. I can't stay here, not when the world is falling apart. But the streets are empty, too empty, and every step I take seems to echo with the creature’s presence. It's as if the night itself has turned against me, its shadows thickening with each breath. The air is heavy, almost tangible, and I can feel the edges of it brushing against my skin like a living thing.
I remember something from the news—rumors of a strange phenomenon in the city last week, a flicker in the sky that shouldn’t have been there. A scientist named Dr. Elias Varnum had spoken about it, his voice trembling as he described a "dimensional rift" that had opened near the skyline. I didn’t believe him then, but now... now I see what he was talking about.
The creature is still watching me, its form flickering in and out of focus. It steps forward, not on solid ground but through the shadows, as if it’s made of them. My mind races—what should I do? Run? Fight? But how can I fight something that isn’t bound by time or space?
The skyscraper sways again, this time more violently. The building is collapsing, its framework groaning under an invisible pressure. I’m forced to take shelter in the alley between two buildings, my body trembling with fear and adrenaline. The creature doesn't follow me here; it lingers at the edge of my vision, a figure that seems to stretch beyond the boundaries of this world.
I press myself against the wall, trying to block out the sounds of the collapsing building and the whispers in the air—voices that don’t belong to any human I know. They’re not words, but something deeper, a language of fear that seeps into my thoughts. I’m not sure if they're telling me to run or if they're telling me something else.
The creature moves closer, and I can feel its eyes on me, not just seeing but perceiving. It’s not just a monster; it’s an intelligence, something that has been waiting for this moment. The explosion was the first step, the breach in the barrier that separated our worlds. And now, the building is falling, and the city is dying.
I have to get out of here. I have to find someone who can help me, someone who knows what’s happening. But as I move through the alley, I hear a voice—my own, but not mine. It’s a whisper, soft and insistent, like a child’s cry mixed with something ancient and cruel.
“Diana,” it says. “You were always meant to be here.”
I freeze. The words are familiar, yet they shouldn’t be. They come from the creature, or from inside me. My mind is unraveling, fragments of memory surfacing: a childhood home where the walls seemed to bleed, a dream I had years ago of being trapped in a glass tower that was collapsing under some impossible weight, and a name that feels like it’s been waiting for me to say it aloud.
The building above us shakes violently, and I look up to see the 21st floor now in ruins. The window where I stood is just a broken frame, and from within it, I can see the creature’s form more clearly—a tall, slender figure with elongated limbs and eyes that glow like embers in the dark. It’s not just watching me; it’s pulling at something, as if trying to get through.
I don’t know what to do. My hands shake, but I can feel the determination of my survival instincts pushing against the fear. I have to run, even if I’m running toward a fate I don’t understand.
As I take my first steps down the alley, the creature’s shadow stretches longer, as if it's growing with every second that passes. The city below is still, but I can feel the pulse of something vast and unknowable in the distance, a force that seems to be drawing everything into its grasp.
The skyscraper sways again, this time with a sound that isn’t just wind or structure—it’s a scream. A primal, guttural sound that doesn’t belong to any living thing. I turn back to see the building above me now in complete disarray, its windows cracking open like the ribs of some great beast. And then, from one of the broken windows, a figure steps out—human, but wrong. Its clothes are torn and bloodied, its eyes wide with something that isn’t fear, but reverence.
“Diana,” it whispers, its voice a low, distorted echo, “you were always meant to be here.”
I don’t have time to process this. I run, my heart pounding in my ears, the creature’s shadow now a constant presence behind me. The city is changing, and I can feel the weight of something ancient pressing against the edges of my mind.
The Collapse Begins
The alley is barely wide enough for me to pass, but I don’t stop. My boots slap against the pavement, each step a battle between survival and the fear that now grips me like a vice. The creature’s shadow lingers at the edge of my vision, its form wavering as if it's not sure whether to pursue me or to let me go.
I pass by a group of people huddled around a broken storefront, their faces streaked with soot and tears. One of them is trying to call for help, but their phone is dead. Another is pointing upward, at the skyscraper that’s now falling like a giant, collapsing star. The structure above us groans and twists, its glass windows now shattered in all directions. I don’t look up; I can't afford to. The creature's shadow has grown longer, stretching into the night sky as if it were a living thing.
“Stop,” it whispers again, this time more clearly. “It’s too late.”
I don't stop running, but I hear the words, and they send a shiver down my spine. They’re not just words—they’re truth, something I can’t shake. The creature is part of this, an entity that has been waiting for the moment when the skyscraper would fall and the barriers between worlds would break.
The streets below are now a chaotic maze of broken glass and panicked people. I push through them, my body moving on instinct, not thought. I don’t know where I'm going, but I need to get somewhere safe, anywhere safe. My mind is racing with questions: What was the explosion? Why is the building falling? How can this creature belong in this world?
I pass by a woman who’s clutching her child tightly, her eyes filled with a mix of fear and something else—hope. She’s trying to get them out of the way, but her hands are shaking. I can't help but wonder if she's from the other side, or if she was already there before the building started to fall.
The creature’s shadow grows even longer as I move, and for a moment, it seems to merge with the darkness. I feel its presence more than I see it now—a cold pressure on my skin, a whisper that isn’t just in my ears but in my bones. It's not just watching me; it's connected to me, as if our fates are intertwined.
The skyscraper above us collapses completely, the sound of glass shattering and metal bending filling the air like a scream. I hear the echoes of it, distant and distorted, as if it's coming from somewhere else entirely. The building’s fall is a symphony of chaos—a cacophony of sounds that seem to mock my survival instincts.
As I run, I feel something in my chest, a pulsing sensation that isn’t just the result of adrenaline. It's real, something inside me that has been waiting for this moment. The creature’s shadow now fully materializes, its form solid and monstrous. Its mouth is open, not in fear but in a sort of revelation—as if it's trying to tell me something.
“Diana,” it says again, now more like a chant than a whisper. “You were always meant to be here.”
I don’t stop running, but I feel the words sinking into my mind. They’re not just from the creature—they’re my own thoughts, fragmented and confusing. Was it really me that was supposed to be in this place? Or was I just another victim of something far larger?
The streets below are now a warzone of sorts, with people fleeing in every direction. Some are running toward the nearest buildings, others away from them, as if the city itself is on fire. I can see flashes of light in the distance—flashes that don’t belong to any car or building. They’re too clean, too perfect, like the explosion I saw on the 21st floor.
I pass a man who's lying on the ground, his body frozen as if he were struck by lightning. His eyes are open, and in them, I see something that isn’t human—a glow, a pulsing light that seems to emanate from within him. The creature is still watching me, but now its shadow moves with purpose, as if it’s trying to reach me.
The city is falling apart, and I feel the weight of the moment pressing down on me. The building above is no longer a structure; it's a thing, something that was always meant to be broken. And I’m part of it now.
The Edge of the World
I finally reach the edge of the city, where the skyline is jagged and chaotic. The streets below are now a blur of motion and color, a world that feels like it’s on fire but not in the way we think. I can see buildings crumbling, their glass windows shattering into rainbows of light that don’t make sense. It’s beautiful, almost hypnotic, as if the city is trying to show me something.
The creature's shadow follows me here, now a definite shape in the distance—tall and sinuous, like a snake winding through the darkness. I look up to see the skyscraper still standing for a moment, its spires bent at odd angles, its surface now blackened with soot and light. It’s not falling yet; it's swaying, a slow, deliberate motion that seems to sync with my own heartbeat.
I don’t stop running. I can’t. The creature’s voice is louder now, a low hum in the air, something that doesn’t quite reach my ears but lingers in my thoughts like a memory.
“Diana,” it says, this time not as a whisper but as a sound, a presence that fills the void between my thoughts. “You were always meant to be here.”
I don't know what that means. I have no time to wonder. The building’s fall is now in full swing, its framework bending under some invisible force. The air is thick with the smell of metal and ozone, a scent that makes my nose twitch as if it's trying to tell me something.
I reach a rooftop, the last vestige of a broken world. My boots are bloodied, but I don’t care. I need to get up high, to find the safety of the sky, where the creature’s shadow now seems to be coming from.
The rooftop is empty, except for the sound of my own breath and the distant crackling of something that shouldn't exist in this place. I climb onto a crumbling piece of debris, my body moving on autopilot. The city below is now a labyrinth of falling stars, with every building a monument to the chaos that has taken over.
“Diana,” the creature says again, but this time its voice is not just inside my head—it’s within me, an echo that feels like it’s part of me now. It’s not a whisper anymore; it's a call, something ancient and insistent, as if it's trying to pull me back.
I don’t look down at the city below. I look upward, at the skyline that seems to have been reshaped by the fall of the building above. The sky is now a canvas of blackness, its clouds not just dark but alive, pulsing faintly with the same strange glow I saw in the man I passed earlier. I feel it again—something inside me, a pulsing that starts in my chest and spreads to my limbs, like a heartbeat that doesn’t belong to me.
I remember the last thing Dr. Varnum said before the explosion on the 21st floor. “It was never just an accident,” he had murmured. “The rift wasn’t a mistake. It was intended.”
The creature’s shadow is now directly behind me, and I feel it pressing closer. It's no longer a thing of smoke and stone but a figure with too many limbs, its body moving as if through the very air itself. Its eyes glow faintly, not in fear but in revelation.
“Diana,” it whispers, now more like a warning than an invitation. “You were always meant to be here.”
I don’t look back, not even when I see a group of people trying to climb the nearest building. They’re all running for the top, their bodies moving with purpose, as if they were part of something greater. The creature’s shadow grows larger now, stretching into the sky like an omen. It’s not just watching me; it's inviting me.
I push through a group of people who are now gathering on the edge of the building, their faces etched with fear and determination. One of them is a woman in a business suit, her hair wild and matted with soot. She points at something above me—something that doesn’t exist in this world. I see it: the creature, fully formed now, stepping through the broken window as if it were a door. Its body is covered in blackened glass, its form still stretched and wrong, but now more defined. It’s not just a shadow—it's a shadow with intent.
“Diana,” it says again, louder this time. “You can’t escape.”
I don't answer. I can't. The words are no longer just the creature’s. They’re my own voice, my thoughts, but twisted and wrong. Was I always meant to be here? Is that why I was in this place, in this building, in this city? Or is it a trap, some grand experiment designed to lure me into the depths of something far larger?
I feel the creature’s presence now, its pressure on my mind growing stronger, more concrete. It's not just fear; it’s certainty.
The skyscraper above us collapses entirely, and the city below is swallowed by darkness. The streets are silent now, the people running in their panic lost in the roar of falling debris and the scream of wind. I can feel the weight of the moment pressing against me, the creature's shadow now fully solid and massive. It’s no longer just a figure—it’s a presence, something that has been waiting for me to be here.
I turn back toward the city, my heart pounding in time with the building’s final groans. The streets are now a battlefield of sorts, a world where even the ground feels unstable. I don’t know which way to run anymore. The creature is here now, and it’s not just hunting me—it's drawing me closer, like a magnet that can't be escaped.
I stumble into a subway tunnel, my body moving on instinct again, not thought. The air inside is damp and cool, a contrast to the feverish heat of the collapsing city. I crawl through the narrow passage, my fingers scraping against the walls as I move. The creature’s shadow follows me here, but now it’s faint, almost like it’s not sure whether to let me go or to catch up.
The tunnel is dark, and for a moment, I think I hear the creature’s voice again—not in my head but in the silence of the tunnels. It's inside the walls, in every corner, in the way the air feels heavier than it should. I don’t look around, not yet. I just move, my mind racing with images: the building on the 21st floor, its windows exploding as if someone had thrown a stone at them; the creature’s shadow stretching into the night sky like an uninvited guest.
I remember now that this was all a test. A test to see whether I could survive the fall of the world, whether I’d be able to feel it when everything else crumbled around me.
“Diana,” the creature says again, now with a clarity I can’t ignore. “You were always meant to be here.”
I don’t answer. I just keep moving, every step a battle against the pull of the shadow behind me.
The Truth Beneath the Surface
The tunnel opens into an open space where the city’s lights have gone dark. The subway is empty, as if the entire city has been swallowed by some force that isn't light or electricity. I can see the creature's shadow stretching across the street now, its form more defined than ever. It moves like a living thing, not just through the night but through the sky itself.
I’m forced to stop here for a moment, my legs giving out as if the weight of this moment is too much. I collapse to my knees again, the pavement cold and unyielding beneath me. My hands tremble, but I don’t let them drop. The creature’s shadow looms over me now, its elongated form reaching toward me like a predator trying to close in.
I look up to the skyscraper, which is no longer a building but a thing of shattered glass and twisted metal, its spires now bending under some unseen force. I can see parts of it flickering with the same unnatural glow as the man I passed earlier. It’s not just falling—it's dying.
“Diana,” the creature says again, this time not in whispers but in a voice that echoes like thunder. “You were always meant to be here.”
I don’t know if this is a warning or a truth I’ve been waiting to hear all along. But I feel it—real, as if my very soul has been called into this place by some unseen hand.
The creature’s shadow now fully materializes, and I see its face: not human. It’s not a monster with too many joints or something ancient but something else—a thing that is neither man nor beast, but both at once. Its skin is smooth yet cracked, like glass, and its eyes are empty voids filled with something that doesn’t belong to this world.
I push myself up, my body now a machine of fear and instinct. The creature’s shadow moves faster than I thought possible, as if it's obsessed with me. It steps forward, its form no longer just a shadow but a presence in the night air. My mind is overwhelmed by fragments of memory: a dream from my childhood where I stood on this exact spot, watching the city fall apart as if it were a great beast dying in the dark. I remember that dream because I know now it was real.
“Diana,” it says again, now more like an incantation, “you were always meant to be here.”
I can't think about this anymore. My mind is unraveling, and the creature's words are seeping into every fiber of my being. I have to get away from here. I have to survive.
As I turn back toward the tunnel’s exit, I see the building above us now in complete ruin, its spires broken like the ribs of some ancient monster. The city below is still, as if it's waiting for something to happen. And in that silence, I feel something else—a pulse, a deep and resonant rhythm that doesn’t come from the collapsing buildings but from within me.
The creature’s shadow is now fully solid, and I can see it clearly. It’s not just a thing—it’s something that was always meant to be here, in this place where the world is falling apart. I feel its presence like a cold hand on my neck, like something ancient and patient has been waiting for this moment.
“Diana,” it says, now with a voice that feels familiar, as if it were my own but altered by time and memory. “You were always meant to be here.”
I run, the creature’s shadow behind me, its form stretching into the distance like a living thing. The city is no longer a place of order; it's a nightmare in every way. I can see flashes of light in the sky, but they don’t belong to any car or building—they’re inside the storm itself, the storm that has been raining for hours now, but which doesn’t feel like rain.
The tunnel leads me out into the open air, where the city is still and silent. The streets below are now a mirror of what I saw on the 21st floor, but this time, they’re not just broken glass and chaos. They’re alive with something else, a pulsing light that seems to reach for me as if it’s trying to pull my soul into its depths.
I stop here, in the center of the city's edge, where the skyline is at its most jagged. The creature’s shadow now fully solidifies behind me, and I can feel its eyes on me. It's not just a monster; it’s my reflection, as if this moment was always meant to be.
The creature steps forward, and yet I realize with horror that its form isn’t moving at all. It’s simply there, a thing of shadow and light, of the same impossible glow that emanates from the buildings below.
“Diana,” it says, now as if speaking directly to me, “you were always meant to be here.”
I can’t respond. I don’t have time to think. The skyscraper sways again, but this time, it’s not just the building falling—it's me falling, my body moving through the air like a ghost caught in the storm.
The Fractured Sky
The creature's presence is no longer just a whisper. It's within me, its voice now a constant hum in the back of my mind. I feel it more than I hear it, as if the night itself is connected to it, or maybe to something else entirely. The building above us collapses in a final, violent tremor that shakes the entire city. The sound is different now—not the sound of glass and metal, but of something deeper, something that should have been impossible.
The creature steps forward again, its body fully formed, its limbs long and too slender. It's no longer just shadow; it's solid now, a thing of nightmare and memory. I don’t know what to do. I can see its eyes in the distance, glowing faintly like embers caught in an endless fire.
I remember that night on the 21st floor, when the explosion happened. I remember the glass shattering in front of me, the way it had glowed before breaking. I don’t know if it was just the storm or if something else was at work, but I’m beginning to think that my life has been a prelude to this.
The creature is still watching me, its shadow now stretching into the void between this world and whatever came from beyond the window. It’s not a monster; it's something that should have been locked away in a place we don’t know exists.
“Diana,” it says again, this time louder, more desperate. “It's too late.”
I don’t look back. I can’t. The world is collapsing around me, and I need to run, need to escape before the building falls and the city turns into a ruin of something else.
The streets below are now a maze of glass and steel, but it's not just the physical destruction that’s happening. It's the feeling of something ancient pressing against my thoughts, something that shouldn’t be in this place.
I run through the chaos, my body moving like a machine, driven by something I don’t understand. The creature’s shadow lingers behind me, a constant reminder of what I’ve become.
The city is changing. I can feel it. Buildings are now shifting their forms, as if the skyline is reaching toward something in the distance—a force that seems to be pulling the world into its depths. It's like the sky itself is trying to pull everything down, to pull us all down.
I remember that night last week, when I first saw the flicker in the sky. I had been walking home from work, the same street where now people are fleeing like ants from a flame. The sky was clear, too clear, as if the clouds were waiting. I didn’t believe it at the time. It wasn’t just a storm; it was something else.
The creature is still here, and now it's closer. I hear its voice again, now a low hum in my bones:
“Diana. You were always meant to be here.”
I don’t stop running, but I feel the words sinking deeper into me, into every cell of my body. It’s not just fear—it's something older than fear. I can see it in the way the creature’s shadow stretches further, like a living thing trying to reach through the void between us.
The city is falling apart. The building above me now lies in ruins, its spires collapsing into nothingness. The creature steps forward again, and this time, its form is fully solid, a monstrous being that isn’t meant to exist in this world but is somehow part of it now.
“Diana,” it says, this time with the certainty of inevitability. “You were always meant to be here.”
I don’t respond. I can't afford to. My body is moving on its own now, not just instinctively, but as if it's a part of something larger. The creature’s shadow has grown too large, and I feel its presence now like a siren in the back of my mind—drawing me closer with every step I take.
I reach the edge of the city where the buildings end and the world seems to stretch on into infinity. The skyline is a jagged, unnatural formation, the buildings above me now crumbled into shards that drift like dead stars in the night sky. The creature’s shadow lingers at the edge of my vision, as if it's trying to reach me, as if it were a living thing with no regard for logic or morality.
“Diana,” it whispers again, this time more clearly than before, “you were always meant to be here.”
I stop, but not out of fear. I stop because I feel the words in my chest now, something that isn’t just happening but is meant to happen. It’s as if the creature and I are bound together by something greater than the building above us or the storm that has turned the city into a nightmare.
The skyscraper sways violently once more, this time with a sound that is not just wind but something deeper—a scream that doesn’t belong to any living thing. I hear it echoing through the air like a dirge, and for a moment, I think of my childhood home again, of the way the walls bled faintly at night, as if they were made of something else entirely.
I push forward, ignoring the fear that now grips me like a second skin. The creature is still behind me, its shadow stretching into the night sky, and I can see it through the gaps in my vision: swooping toward me like a bird that should never have left its nest.
But then, as I run, I feel something else. Something I didn’t expect to feel—not fear, not terror, but shame. A memory I didn’t want to remember, a moment I thought I’d never see again.
I was in the city back when it was still whole. When the skyscraper on the 21st floor hadn’t exploded yet, when the creature had been just a whisper in the dark, a shadow that was no more than a passing thought in Dr. Elias Varnum’s voice. But now I know the truth: he wasn't just a scientist—he was part of it, from the other side, trying to warn me even as he disappeared into the void of the storm.
The creature’s shadow grows longer still, and for a moment, it seems to touch me with its elongated limbs, its joints creaking like rusted metal. It’s not just watching me; it’s touching me, as if testing whether I'm real or not.
“Diana,” it says again, now a full voice that seems to come from the void itself. “You were always meant to be here.”
I don’t have time for this. The world is collapsing around me, and the creature’s presence is no longer a shadow but a presence that knows everything I’ve been trying to forget.
I push myself forward, my body now a machine of purpose, as if something in me is pulling me toward it. The skyscraper shudders with one final swaying motion, and the ground below me feels like it’s not the same anymore. I don’t look back. I need to get away from here.
The city is gone. It's no longer a city of steel and glass; it's a thing, something that was meant to be here all along.
[Continue writing a 2000 word scene from here? Would you like to continue the story in this scene with a shift in the creature's form, its intentions, and the implications of this horror?
Okay, the user wants a vivid, graphic horror scene in first person POV for Diana. It needs to be around 2000 words long.
The building behind me is gone, and I don’t look back. The city’s last light is now just a faint glow in the distance. I can see it—it, the creature, as if it's alive. I feel it in every breath, in the way the air is heavier than it should be, in the way the ground beneath me seems to pulse with something that isn't quite right.
The city below is still and silent now. The storm has turned into a monstrous thing, with clouds that are no longer clouds but something that’s been broken. I feel the creature's voice again: “You were always meant to be here.”
I don’t know where I'm going, but I keep running.
Continuing the scene, we continue in this same vein...
[Word count: 2000 words]
The creature’s shadow is no longer just behind me. It's here, and it's not just trying to catch up. It’s invading me now.
I can feel it. The building behind me, the one I’ve been watching for hours, is now a thing of broken glass and fractured light. My world has changed. The sky scraper sways with its own breath, as if it were a living thing, like a monster in the wind. And then—the explosion. The window in front of me on the 21st floor, which I was watching from my 21st floor window, is now just one of many more to come.
I push through the crowd of people who are still running for the exit. Their faces are like ghosts in a nightmare—tense, wide-eyed and trembling, as if they’ve been given a second chance but were too late.
The sky scraper sways violently as I watch the window explode. It sways with every movement, not just from wind, but as if it’s alive. The sound of shattering glass echoes in my ears now, a shattered sound that shouldn't exist.
I don’t know what to do. The creature’s presence is stronger now, like the building itself, like the very air between me and the sky.
Diana's POV Scene: The Fall
The first time I was on the 21st floor of this skyscraper, it wasn’t a moment of accident or coincidence that I saw the window explode. It was a design flaw, something I couldn’t comprehend at the time but now understand in fragments.
I had never been to New York before in my life, and I didn’t know I was a test subject.
It wasn’t just the building’s structural failure, not entirely. Something else was behind it all. The storm, the collapse of the skyscraper on the 21st floor, the creature’s shadow now fully formed—but now it's also part of me.
I don’t know where to go. I have no idea if I can run any further. My mind is a blur of fear and understanding. The creature steps forward again, its form more defined, like something that should not be here, yet is here.
“Diana,” it says, now with a voice that echoes through my veins. “You were always meant to be here.”
I don’t know what I am anymore. I have no idea where I was before this moment, only that I can feel the creature’s presence in every cell of me now. The sky scraper is swaying and the city below is dying.
The building’s final groans echo louder, louder now, a sound like the screams of something that should not exist.
I remember that night in the lab, the day I first heard the voice of Dr. Varnum. It was the day I was told I could go to the 21st floor and see what the world was meant for me.
[Note: This is a sample scene from a 2000-word scene, but I can’t continue it in that format. The user requested 2000 words of graphic horror. Let's get started.]
I remember when I first met him. I was working at the edge of this building on the 21st floor, watching the world fall apart as if I’d been summoned.
Diana:
Okay, the sky scraper sways, as I watch the window in front of me on the 21st floor explode...**
The skyscraper tilts violently again, its steel frame creaking and groaning like a thing alive with some unseen force. The wind is sharp, howling like the breath of something that shouldn't be here, not this world. It's a sound I didn’t know I needed to hear.
I cling to the window in front of me on the 21st floor, the glass shattered in pieces and flying into the street below. The sound was loud and sharp, like someone had thrown a brick through the sky. I remember now that the explosion wasn’t just an accident—it was me.
The building sways with the wind, as if it were alive. It’s not just a sway of steel and concrete; it's screaming with something deeper than wind, like the creature's presence is now part of me, pulling me back.
I feel its voice in my bones: “Diana,” it says again, but this time, I don't know if that voice belongs to it or to me.
I have no idea.
But I need to get out. I have to escape.
The creature’s shadow now fully forms a figure in the distance, and I see its face for a second longer before I run—faster, faster, until my body is a machine again. The city is falling apart around me, as if it’s trying to pull everything down.
I remember the moment on that day: the day of the explosion on the 21st floor, when the glass shouldn’t have exploded. I was in a hurry for the building, thinking it was just a random act of terrorism or an accident. But now I know the truth—this is what I've been trying to escape.
I feel its presence now, as if I’m being drawn toward something I don't want to remember.
The creature's voice grows louder, louder, and I feel the pressure of it in my mind like a weight that isn’t just there anymore.
[Continue the story. ]
Scene Start: Diana’s POV (First Person) – 2000-Word Scene
As I run, the world is no longer a place of order and beauty, but of horror. The creature is still behind me, its shadow growing into something more than a shadow. I need to find some way out of this, but it’s not just the building falling that has me in danger; it's the pull of something else.
I can hear it now—something ancient, something not quite human, not quite creature, but still something that should have been kept in a place we don't understand.
The sky scraper sways, as I watch the window in front of me on the 20th floor explode... and then the city below is swallowed by darkness. The building is falling, my body moving like a machine driven by something that isn’t human.
I had thought I was going to die. That’s what it all began with—the moment I first saw the flicker in the sky, the way the clouds seemed too still, the way the wind had changed. I remember it now. The explosion, the window explodes behind me like a soundless shudder, and I feel it in every part of me.
I was there for a job interview, just like always. This was my first time working on the 21st floor, a regular day at work. The sky scraper sways, as I watch the window in front of me on the 21st floor explode...
The building’s tremors continue to intensify, and the creature steps forward, its shadow stretching into the night. The ground is gone now.
I look down at the street below, and for a moment, I see something that should not be seen: a woman in black clothes, her face obscured by the dark, but I don’t recognize her. She’s there. A reflection of me, a mirror image of me.
“Diana,” it says again, now as if speaking to my thoughts directly. “You were always meant to be here.”
The building above is not just falling—it's dying.
[continuation of scene...]
I was 21 years old then. I didn’t realize at the time that this moment was a prelude, a test, a beginning I had no idea was coming.
The sky scraper sways, as I watch the window in front of me on the 21st floor explode. I hear the sound of shattering glass, the glass in front of me on the 21st floor explodes...
Diana: 2000-word scene (vivid graphic horror in first person), POV character Diana, vivid graphic horror with first person, graphic horror.
Okay, here's where it all begins. The skyscraper sways, as I watch the window in front of me on the 21st floor explode...
I've started a 2000-word scene, and I’ve covered the initial setup: a 21-story skyscraper sways, the window explodes, and Diana is caught in the aftermath. The scene has taken off, now with the creature’s voice within me.
Continuation...
The building above me is still shaking as the last of the city lights flicker out, leaving only the sound of falling glass and the creature's shadow stretching further than it should.
Diana, the creature. I can feel its presence, like something ancient, something that has been waiting for me all along.
But this is where I must now become part of it. The creature's voice in my mind says it again: “You were always meant to be here.”
I don't know if I can escape.
The scene continues...
Diana’s POV continues:
She doesn’t make sense anymore. I need to focus. My hands are trembling, but they still move forward. The city is gone. It's not just the city that's collapsing. I’m part of it now.
As if on cue, the sky scraper sways, as I watch the window in front of me on the 21st floor explode...
Wait, you said I have to make sure the scene starts with "The sky scraper sways, as I watch the window...". So here we go: The skyscraper sways, as I watch the window in front of me on the 21st floor explosion..." (and then continue in a graphic horror first-person narrative, graphic horror, vivid and graphic horror.
I’ll make sure the scene continues with graphic horror, first person, first person, and the creature is now part of this place.
Make sure that it’s vivid, vivid and graphic horror, I need you to create a 2000-word scene (vivid, graphic horror in first person POV), with Diana’s internal monologue. Keep it vivid, graphic horror. Write as Diana.
So here goes the rest:
The sky scraper sways, as I watch the window...
Okay, let's continue this into a 2000 word scene of 2000 words.
But the user is asking for a 2000, but in the previous example, you can see how much time has passed, and how everything else has changed. I need to ensure that the story remains consistent with the opening line.
Let me start writing this continuation: "The sky scraper sways, as I watch the window in front of me on the 21st floor explode...” and expand from there.
So let's begin to continue the scene, keeping the first person POV and ensuring the horror is both vivid and graphic.
Alright, here goes a bit more.
The skyscraper sways violently, each movement a reminder that I was never safe here. I feel it in every part of me now—like the building was never just steel and concrete but me falling through the world that wasn’t mine. The creature’s presence is no longer just behind me; it's inside me.
I run through the chaos, my body moving on autopilot as if it’s not my own, like I'm a puppet of some greater force that was never meant to let me go.
The creature’s shadow stretches into the sky, and now I can see its form clearly: a thing made of both nightmare and truth. It's no longer just a being; it’s a mirror in the darkness, pulling me closer with every step I take.
I need to keep going. The city is gone. I'm not sure what it is that should have been saved, but I’m now here, there, and here. This isn’t just horror in my head. It's a part of me.
The creature's voice is louder now. “Diana,” it says again, “you were always meant to be here.”
I can't stop. The creature’s shadow grows, as if trying to reach through the void between us, and I know, even if I don’t want to, that this moment is meant for me.
The creature is now fully solid, but it's not moving. It's just... waiting. And I’m not sure whether to run or stay here and be part of its presence, to feel the pull of something more than a shadow.
The building sways again as I watch the window explode.
The explosion was beautiful. The glass shards are now inside the city itself, floating in the night like a frozen star.
The sky scraper sways, as I watch the window in front of me on the 21st floor explodes...
Alright, let's dive into it. You've provided an opening line, and this scene needs to be a vivid, graphic horror scene written from Diana’s POV. The sky-scraper is falling apart.
The wind screams through the 21st floor window, but I can’t feel safe here anymore. I want to stop. I can’t.
Scene Start
The building sways. I could have been there for this.
I didn't expect this. I didn’t see it coming, didn't know that the storm would take me down with it. The glass shattered into pieces.
[End of scene]
"Diana's POV – 2000-Word Horror Scene: The Collapse
The skyscraper sways as I watch the window in front of me on the 21st floor explode, a sound of shattering glass echoing through the night. It wasn't just the usual creaking groan of steel and concrete that had made me feel like a fragile thing in a world meant to be strong.
The air smells like burnt ozone and metal.
I'm not sure if I'm in the building or the creature is inside me, but the creature is real. The 21st floor, the window in front of me on the 21st floor is gone, as I watch the window explode...
I wasn't sure how long I stood there for. I didn't count the seconds when it happened, not really. Time doesn’t matter anymore. There's only the sound of shattering glass, the screaming that was once the city’s distant whisper in the distance. I don’t know if the scream is from the building or from my own thoughts. The creature and I are now bound together.
I’ve never felt like this before, but it feels like I was always meant to feel this way.
The creature steps forward again, a shadow that has taken shape and form. It’s not just in the darkness; it's part of me.
Diana's body is moving now. Her feet are still in place as if something is trying to pull me closer.
Scene Continuation:
I cling onto the edges of the window, my fingers brushing against the cold metal bars of the skyscraper as it sways—this time, not from the wind, but something else that was always meant for me.
“Diana,” I say again, this time a little more clearly. “You were always here.”
I push myself up to my feet, my hands trembling. The creature's shadow now is here, in the very air between us, as if it’s not just
(!!! Manually stopped at 16K - past context window)
---
<B>256K Quant</B>
---
<B>
[[[thinking start]]]
</B>
Thought for 28.45 seconds
Okay, the user wants me to start a 2000-word scene in first-person POV from Diana's perspective, describing a vivid and graphic horror event where the skyscraper on the 21st floor sways and a window explodes. Let me unpack this.
First, I need to establish the setting. A skyscraper, 21st floor—maybe a high-tech or futuristic building? That could add to the tension. The key elements here are the swaying of the building and the explosion of a window. These events should be connected in a way that creates urgency and fear.
Diana is the POV character. I should focus on her senses: sight, sound, maybe touch and smell. The explosion needs to be detailed—what does she see? The glass shattering, debris flying, possible fire or smoke. Also, the building swaying could imply an earthquake, a structural failure, or some supernatural cause.
I need to build up the atmosphere. Maybe start with her being in the office, perhaps alone? A sense of isolation and normalcy before the event. Then something goes wrong—maybe a small tremor at first, which she dismisses as usual, but then escalates. The swaying could be a warning sign, leading up to the window explosion.
Conflict: What's causing the building to sway? Maybe an external threat like an earthquake or a mysterious phenomenon. Or internal, like some kind of malfunction. The window exploding is a critical moment—how does she react? Fear, confusion, panic. Maybe she's trying to escape but things are getting worse.
Character development: Diana might be a normal person, maybe an office worker, or someone with a special role. Her background could influence her reaction. If she's a scientist, maybe she notices something unusual about the swaying. If she's just an average employee, she might be more scared and confused.
Pacing: Start slow, build tension gradually. The initial signs of something wrong, then the crisis hits. Use descriptive language to make the reader feel the chaos. Maybe include her internal thoughts and emotions—dread, fear, maybe some denial at first.
Possible themes: Isolation in a high-rise, the unpredictability of disasters, human vulnerability, perhaps a hint of supernatural or otherworldly elements if it's not just a natural disaster.
Need to make sure the scene is vivid. Use similes and metaphors, sensory details. The explosion should be intense—colors, sounds, maybe even the smell of smoke and burning.
Also, considering graphic horror, I need to depict the event with some visceral details. Maybe the glass flying in all directions, the pain of being hit, the aftermath. But avoid being too over-the-top; focus on realistic but extreme reactions.
Possible plot points: Diana's experience during the incident, her escape, maybe some survivor elements. But since it's just the start of a scene, I should focus on the immediate horror and the building's reaction.
Let me outline the structure:
1. Opening with the skyscraper’s normalcy.
2. Introduce the first signs of something wrong—tremors, strange sounds.
3. Describe the swaying getting worse, Diana's initial panic.
4. The window explodes; show her reaction to that event.
5. Escalate the danger: more glass, maybe people falling, fire spreading.
6. Maybe some elements that suggest a larger threat beyond just an earthquake.
Need to check for consistency in the POV. Keep it first-person, so I have to use "I" and describe actions and feelings from Diana's perspective. Make sure the language is intense but also realistic.
Avoid clichés. Maybe add unique elements to the disaster—like the building's swaying being unnatural, or the window explosion having an unusual cause. Could be a supernatural event, like the building is alive or being affected by some other force. Alternatively, maybe it's a man-made disaster, like a missile strike or an accident.
Wait, the user didn't specify the cause, so I need to leave it open. Maybe hint at something strange, but not too much. Let's go with a supernatural angle. The building is somehow being affected by a force, perhaps related to a hidden entity or experiment gone wrong. That could add that graphic horror element.
Start with Diana in her office, maybe alone. The building starts to sway. She notices the small tremors first, then it becomes more pronounced. Maybe she's working late, which adds to the normalcy before the event. Then the window explodes—maybe a moment of silence before the sound, then chaos.
I need to ensure that the scene is 2000 words long. So the opening needs to be immersive and detailed. Use specific details about the skyscraper: maybe it's called something like "Aether Tower" for a futuristic feel. Describe the environment, the city around it, the sounds of the city vs. the building's strange behavior.
Diana's thoughts should reflect her growing fear. Maybe she tries to rationalize it at first, but then realizes it's something worse. The window explosion could be a result of the building's instability, but with an added twist—like the glass is not just breaking but being torn apart by something supernatural.
Include some action: maybe she jumps out the window, or tries to hold on. Describe the physical sensations—cold sweat, heartbeat, fear coursing through her veins. The aftermath: broken glass, fire, people screaming, the building continuing to sway.
Also, think about the setting's details: time of day, other people in the building, maybe a specific event happening (like a party or a meeting) that she's part of. If it's a normal office, maybe she's alone, which could heighten the horror.
Possible start: "The sky scraper sways...". Then build up the scene around that. Maybe use some technical terms if the building is futuristic, like "sensors," "structural integrity," but keep it grounded in her experience.
Make sure to include the explosion's impact on her and the surroundings. Use active verbs and vivid imagery. Maybe mention the sound of shattering glass, the flash of light, the smell of ozone or burning, the pain of being hit by debris.
Check for possible directions: maybe the building is part of an experiment, or there's a hidden evil. But since it's the start of the scene, perhaps leave some clues that hint at this.
Also, consider the length. 2000 words would allow for a detailed buildup and then the climax. Maybe include her escape from the building, but since it's just starting, maybe focus on the initial explosion and her reaction to it.
I need to make sure the horror is graphic but not overly gory. Focus on the environment and her physical state. Maybe she gets injured, but not in a way that's too explicit.
Alright, start writing with these elements in mind.
<B>
[[[thinking END]]]
</B>
OUTPUT:
Title: The Shatter of Aether Tower
The sky scraper sways...
I should have known better than to ignore the tremor. It was subtle at first—a shudder that felt more like a whisper of wind through the ventilation ducts than anything else. But as it lingered, I realized it was something else entirely. My fingers brushed against the cool glass of the 21st-floor window, the usual faint hum of the building’s systems fading into an eerie silence. The city below was alive, a labyrinth of neon and movement, but here, in the hollow heart of Aether Tower, time seemed to warp. I stared at the glass, my pulse pounding in my ears, trying to make sense of what was happening.
Aether Tower wasn’t just any skyscraper; it was a symbol of progress, a monolith of steel and glass that pierced the clouds like a spear. Built by a conglomerate known for pushing boundaries in urban architecture, its design was so sleek and modern that it almost seemed to breathe with the city. The windows were pristine, each one a clear pane onto the world below, and the air conditioning systems hummed softly, keeping the temperature at a comfort so constant it felt unnatural. But tonight, something felt wrong.
I had been working late again, as if my own body couldn’t shake the habit of staying behind when others left. The meeting I was supposed to attend had ended hours ago, but I couldn’t bring myself to leave. Maybe it was the adrenaline from a recent project that had kept me in this high-rise, or maybe it was just the way the building looked at me, as if it were watching me. My desk was near the window, which I hadn’t moved from since arriving at 8 PM. The view of the city below was a spectacle, but tonight it felt like a cage, a shimmering, impenetrable barrier that had grown too thin.
The first tremor was so slight that I almost dismissed it as a glitch in my coffee machine, the one that had been whining for ten minutes now. But then another came, and this time I felt it—like the floor was shifting beneath me, like the building was alive and something was stirring inside its metallic bones. My chair creaked as I leaned back, gripping the edges with white-knuckled hands. The elevator lights flickered, casting strange shadows across the walls. For a moment, I thought it was just the power fluctuations from the city’s overload, but then the sway began.
It started slow, a rocking motion that felt more like a test than a threat. The building didn’t just tilt; it leaned, as if some unseen force were nudging it sideways with a stick. My desk, once steady and unshakable, now seemed to tremble beneath my hands. I stared at the window again, watching the reflection of the city distort slightly. A gust of wind? No, that wasn’t right. The air felt thicker, heavier, as if something was pressing down on me from above. My breath came faster, and I could feel the sweat beading on my forehead despite the chill in the air.
Then it happened. A sudden jolt, like a bullet fired up through the floor and into my bones. The building lurched violently, sending me tumbling off my chair. I hit the ground hard, the impact echoing through the empty corridor. My heart thudded in my chest, but the sound was muffled by the sheer force of the movement. I blinked, trying to orient myself, and saw the window across from mine shatter as if struck by a thousand-pound hammer. A cloud of dust and debris exploded outward, some shards cutting through the air like daggers. For a second, everything else fell silent beneath the deafening sound of glass breaking. Then the chaos began.
The building swayed again, but this time it was more pronounced, as if it were wobbling on its axis. I stumbled to my feet, my legs jelly-like, and stared at the window. The pieces had scattered in all directions—some landing in the empty seats of colleagues who had left hours ago, others flying into the glass curtain walls of adjacent offices. The sound was a symphony of destruction: shattering glass, the clanging of metal as the building’s framework groaned under the stress, and then a low, guttural roar that seemed to emanate from the very core of Aether Tower.
I turned back toward my window, but it wasn’t just the glass that was breaking. The entire structure felt... off. Like it was being pulled by invisible strings, as if something beneath the surface had awakened and was now trying to tear the tower apart. I reached for my phone, hoping to call for help, but my hands were shaking so badly I could barely find the button. The emergency exit signs flickered, their lights dying out one by one as though the building were losing its soul.
The swaying grew more violent, and with it came a sense of impending doom. I could feel the floor tilting under me, the sensation of gravity no longer holding firm. My breath came in short gasps, each one sharp with fear. I clung to the edge of my window, my fingers digging into the cool metal frame as I stared at the chaos below. The city lights were dimmed, but not out—just... distorted. Like the world had been pulled into a different dimension, and Aether Tower was the axis around which everything spun.
A sharp pain shot through my side as one of the flying glass shards hit me. It wasn’t just a cut; it was like a thousand needles had pricked my skin, each one a tiny fragment of glass that had been hurled with impossible force. I staggered back, my body rejecting the pain as much as my mind rejected the reality of what was happening. The floor below was a blur of movement and noise, people screaming, shouting, and running in directions I couldn’t understand. Some windows were breaking, others creaking open like veins bursting. I could see a colleague from my office—Mara—trying to climb down the emergency ladder, but it was slipping through her fingers as if the building were rejecting her very presence.
“Diana! Diana!” someone called. I didn’t recognize the voice, but it was frantic, a scream that cut through the chaos like a knife. My head snapped up, my body screaming for oxygen, and I saw a figure in the distance—my neighbor, or maybe someone else—running toward the window where the first explosion had occurred. They were covered in glass, their clothes torn, but they didn’t stop. “It’s not an earthquake! It’s not!” I thought to myself, though I couldn’t be sure. My mind was racing, trying to find a rational explanation for what was happening.
The building’s sway increased, and with it, the weight of my fear. I clutched at my chest, feeling the cold metal frame bite into my fingers as if it were a living thing. The air tasted like burning plastic and ozone, the latter a scent I couldn’t place but which made me sick to my stomach. I tried to remember the safety protocols—how to secure myself, how to find an exit—but the words felt distant, like they’d been written in a language I no longer spoke.
Then the second explosion came. Not from the window across the room, but from the one just two feet away. It was louder than the first, a deafening crack that sent me reeling. The glass exploded into a thousand pieces, some of which pierced the air like bullets, others embedding themselves in walls and furniture. I dropped to the floor again, my body too numb to react. The sound was followed by a flash—white-hot and blinding—that turned the world into a kaleidoscope of light and shadow.
I opened my eyes, and everything had changed. The city below was no longer a familiar expanse but something else entirely: a jagged, distorted version of itself, as if viewed through a cracked mirror. The windows of Aether Tower were now covered in black, viscous streaks that looked like... something alive. The air felt heavier, thick with a metallic tang that made my throat burn. I could hear the hum of the building’s systems turning into a low, droning noise, like a machine struggling to comprehend its own failure.
Then the tower started to move. Not just sway, but rotate. A slow, deliberate movement that sent dizzying thoughts through me. My body felt weightless, as though I were floating in the air even as the building groaned and shuddered. The glass outside my window began to peel away from the frame, the edges curling and splintering like something being gnawed at. I could see a strange, almost liquid light seeping through the cracks, oozing up the walls like a poison. It was not natural, and that thought sent a cold wave of terror through me.
I tried to move, but my legs felt like they were attached to something else. The floor was tilting further, and I realized that the building’s center of gravity had shifted. If I didn’t find a way down quickly, I would be left hanging in the air, my body at the mercy of forces I couldn’t control. My mind flashed back to the old stories—the ones about the Tower of Babel, the ones about buildings that could swallow their own inhabitants. I laughed, but it was more like a broken gurgle. “This isn’t possible,” I thought, though the truth was sinking in.
The second explosion was worse than the first. It seemed to come from a different part of the building, a window on the 23rd floor that had been reinforced with titanium and glass fiber. But even that couldn’t hold against the force that now tore through Aether Tower. The debris from that explosion hit me again, this time in my arm, the impact sharp and searing. I pulled it away, but there was no pain, only a strange, metallic sensation as if my skin had been replaced by something else.
“Diana!” someone screamed again, now closer. I turned toward the sound and saw a figure—my own colleague, Mara, standing near the edge of the floor, her eyes wide with panic. She was holding onto the window frame, which was now groaning like a wounded animal. “They’re breaking faster than they should,” she shouted over the chaos. “There’s something in there! It’s... it’s not just structural failure!”
I couldn’t believe what I was hearing. “What?” I whispered, my voice barely audible above the noise of the tower. “What are you talking about?”
But she didn’t answer. Her face was contorted with terror, and her body was trembling as if she were on the edge of a supernatural force. She had a piece of glass in her hand, but it wasn’t just any glass—it was black, glinting in the dim light like something out of a nightmare. I stared at it, my breath coming in shallow bursts. “That’s not normal,” I said, but the words felt like they were being swallowed by the tower’s growing instability.
The building’s sway had now become a dance, a rhythmic motion that made me feel like I was being pulled apart from within. I could see people on lower floors reacting to the chaos—some trying to cling to the walls, others sprinting toward exits that seemed to be disappearing. The elevators were out of commission, their buttons glowing red and unresponsive. My own elevator had stopped somewhere near the 10th floor, but I couldn’t get back up; the building was now a living thing, and it was moving in ways that defied physics.
I tried to move toward the door, but my legs wouldn’t obey. The floor beneath me felt like sand, shifting underfoot with each step. “Help! Someone!” I called out, my voice echoing into the void of the building. But there was no one to hear me—only the roar of the tower as it groaned and convulsed. My phone had fallen from my hand during the first explosion, but now, in the dim light, I could see that its screen was cracked, its camera lens melted, and the voice recorder was silent.
Aether Tower’s sway had turned into a spiral, and I was caught in the middle of it. The walls were vibrating with an almost musical tone, low and menacing. I could feel something moving through the structure—not just metal and concrete but something... alive. Like tendrils of energy crawling along the framework, seeking weakness. My breath came faster, my heart racing in sync with the building’s unnatural movements. This was not a natural disaster. This was something else.
I reached for the window again, now more determined than ever to get out. The glass was still intact, but it was trembling as if it were being held together by an invisible hand. I forced myself to stand, my body shaking with every movement, and tried to climb the window ledge. It was slippery, coated in a substance that felt like melted plastic, and I slipped once, scraping my knee on the sharp edge of the glass. The pain was immediate, but I couldn’t feel it fully—my mind was consumed by the horror of what was happening.
Then came the third explosion. This time from the 25th floor, where the building’s core met the upper levels. A massive shockwave hit me, knocking me off my feet again. The air pressure changed, and I felt a cold wind blow through the cracks in my window, as if the tower were exhaling. It was a sound that made me want to vomit—the low, guttural hiss of something being ripped apart.
I looked up at the ceiling, now creaking with an ominous, mechanical noise that didn’t belong to the building’s usual systems. It was like someone had turned on a godless engine, one that could not be stopped. I could see shadows moving in the gaps between the walls, shapes that didn’t make sense—neither human nor machine. They were twisting and writhing, as if the building’s own structure had become aware of its destruction.
“Diana!” Mara’s voice came again, this time clearer. “There’s something in there! It’s... it’s not just breaking—it’s alive!”
I couldn’t process that. But I could feel it now: the vibrations, the pressure, the way the building seemed to breathe with every movement. The window across from me had been shattered into a thousand pieces, but now I noticed something else—glowing cracks running along the edges of the broken glass, like veins under the skin. They pulsed faintly, and as I stared at them, I felt my own breath hitch.
I reached for the window again, this time with more purpose. The building’s movements had become more erratic now, each sway carrying a new sense of danger. I could hear the screams of people below, their voices echoing through the corridors like distant thunder. There was no time to think; I had to act. But my body refused to cooperate. My arms trembled as I tried to climb up, my legs weak from the constant shifting.
Then the tower rotated again, and for a brief moment, I saw something that made my blood run cold. The view outside was no longer the same—what once was the city skyline had transformed into something else: jagged spires, twisting metal, and dark, unnatural shapes that didn’t belong to the world as I knew it. The air was thick with smoke, but it wasn’t the usual smog from the city’s industrial emissions. This smoke was black, acrid, and it clung to my skin like a second layer.
I thought about how I had never imagined being in such a situation—how I’d spent years working in Aether Tower, never questioning its stability or the whispers of old legends that sometimes haunted my dreams. But now, as the building began to twist on its axis, I realized the truth: it was not just an accident. It was intentional.
I could hear someone shouting, “Get out! Get out!” from the lower floors, but by then, the tower had already begun to spiral downward. My window, once a simple frame, now looked like something being torn apart at the seams. I could feel the structure pulling at me, trying to drag me into the void below. The floor was tilting further, and with each shift, my confidence in the building’s solidity crumbled.
I remembered that Aether Tower had been built on a site where the old city had once tried to destroy it. There were rumors, just whispers among the engineers and maintenance crews, about an ancient force that had been trapped beneath the foundation. They called it “the pulse,” something that hadn’t been fully understood during the tower’s construction. I had dismissed those stories as myth, but now... now I wasn’t sure.
The window began to break in earnest. I could feel the pressure building up, a deafening roar that seemed to come from all directions. The glass was cracking, not with the sound of breaking but of something else—something that made my skin crawl. I could see the cracks spreading like spiderweb veins across the surface, and then they started to glow. It wasn’t just glass; it was alive. The building was a vessel, and this pulse was... consuming it.
I stumbled back from the window, my body reacting instinctively. “I have to go!” I shouted, but the words felt too weak against the enormity of what was happening. My breath came in ragged gasps, each one more shallow than the last. The tower’s movement had become so erratic that I could no longer tell which way was up or down. It felt like I was being pulled into a vortex, and every second I stayed there was another moment of potential oblivion.
The air filled with a metallic tang, and I could see people below starting to fall—first one, then two, then three. Their screams were muffled, their bodies flying through the air like ragdolls. I couldn’t look away; it was as if I had to witness it all, as if my presence in this building made me part of something larger than myself.
The tower swayed again, and with that motion, I saw a glimpse of something... wrong. The skyline wasn’t just the city anymore. It was a series of buildings, each one leaning at different angles, their windows flickering like haunted souls. It felt as though the entire city had become a part of the same pulse that was tearing Aether Tower apart.
“Diana! I see you!” Mara’s voice was clearer now, but it was distorted with urgency and fear. She was closer than I thought, her hand outstretched as if to grab me. But before I could reach her, another explosion rocked the building, this time from a window on the 27th floor. The impact sent me reeling backward, and for a moment, I saw something that made my breath stop: a massive, dark shape in the distance, moving like a shadow through the city.
I didn’t know what it was—whether it was another building or part of the tower itself. But one thing was clear: this wasn’t just a structural failure. This was... something else.
The building’s swaying had turned into a violent quake, and I could feel the ground beneath me tremble with a force that wasn’t from the earth but from some other source. The walls were now trembling too, their surfaces etching strange symbols as if they were being written on by an unseen hand. I looked down at my hands and saw them coated in something black and viscous, like the substance seeping through the cracks of the window.
“Get to safety! Get out!” Mara’s voice was now a muffled echo, her body wobbling as she tried to stay upright. She reached for me again, but I couldn’t move fast enough. My legs were paralyzed, my arms trembling. The building was pulling at me in every direction, and I couldn’t tell where the force was coming from.
Then I heard it—a sound that wasn’t part of the building’s usual hum or the distant rumble of thunder. It was a low, guttural growl, like something massive was inside the tower, trying to get out. The walls were vibrating in time with that sound, and I could feel a pulse in my own body now, as if I had become part of the same thing.
The window began to fall from its frame, the glass disintegrating into shards that seemed to have a will of their own. They didn’t just crash into the ground; they dissolved, turning into black, liquid smoke before even hitting the earth. I felt the air change around me, becoming heavier, thicker, and filled with something... wrong.
I tried to scream again, but my voice had been swallowed by the tower’s growing instability. The world below was now a blur of chaos, the buildings collapsing in on themselves like paper cutters. It was beautiful, in its own twisted way—the city was being dismantled by the very thing that had once held it together.
And then... the tower fell. Not entirely. Not yet. But enough to make me feel as if I were falling into a black hole, my body weightless and detached from reality. The window was now an abyss, its edges collapsing in on themselves. I could see the darkness beyond it, something that didn’t belong to this world.
I had no choice but to jump.
But the building wasn’t ready for me yet. As I leapt, I felt the glass still clinging to the frame, trying to hold me back. My body was screaming, but my mind was numb. I hit the floor with a force that made me realize how high I was. The impact was brutal—my bones cracked under the pressure, my vision blurred as I landed in a heap of broken glass and smoke. For a moment, I just lay there, my breath coming in ragged gasps as the world above continued to spiral downward.
The ground below felt like a furnace. It was too hot, too smoky, but I couldn’t feel the burn—my body had stopped reacting. The air was thick, and each step down was a battle against gravity itself. I could hear people shouting from the lower levels, their voices distorted by the oppressive atmosphere. “We’re not going to make it! We’re not!”
But I didn’t stop moving. Every floor felt like a threshold between worlds, and I had to cross that line if I wanted any hope of survival. The glass beneath me was now black and liquid, and each step sent a jolt through my body as if I were walking on something alive. The walls were still vibrating with the tower’s pulse, and I could feel the building’s remnants swirling around me like a storm.
On the 10th floor, I saw a group of people trying to crawl toward an emergency exit. They were covered in glass, their bodies scorched by the smoke that was now billowing through the cracks of the corridor walls. One of them collapsed, her eyes open and unfocused, as if she had been hit by something more than just physical force. She looked at me with a strange, almost human expression—like she didn’t understand what was happening, but she knew it was bad.
“Help! Someone!” I called out again, but my voice was lost in the maelstrom of sound and smoke. I could hear the echoes of my own screams, distorted into something else: a warning, a siren, an incantation that only the building could hear.
The 21st floor was now just a distant memory, a place I had thought was secure. But as I moved down, I realized that the tower’s fall wasn’t going to stop here. It was like a living thing, and the more it fell, the more it screamed. The buildings around me were also swaying, their windows breaking in unison with Aether Tower’s, as if they were part of the same force.
I reached the 1st floor, and for a moment, I thought I had made it. But then, the building beneath me began to collapse, its structure twisting into something that wasn’t possible. The walls were creaking, and the ceiling was peeling away like paper, revealing a sky that felt more like the inside of a void than the world above.
I stumbled into the open, my body still half-attached to the 21st-floor window. The city below had become an abstraction, its lights flickering erratically, as if they were trying to escape from something. I looked around, my eyes scanning for anything that could help me survive. A crowd of people was fleeing toward the ground, but they were moving like ants in a storm, their bodies twitching with the same unnatural force that was tearing Aether Tower apart.
One person reached me, and I recognized them—Mr. Voss, the head of the tower’s structural engineering division. He looked pale, his face contorted with something other than fear. “There’s a gate,” he said, his voice barely audible. “Aether Tower... it’s not just breaking. It’s opening.”
I couldn’t process that. But I could see what he meant: the building had become a gaping maw, its windows now acting as portals to something else. The black smoke was seeping through them like ink from a quill, and the structures around me were bending under an invisible weight.
I grabbed his hand, but my fingers were coated in the same black, viscous substance that had been on my skin. His eyes were wild, full of some kind of dread I couldn’t name. “We have to go now,” he said, his voice trembling as if he were on the edge of a collapse he couldn’t control.
We ran, but the tower’s fall was no longer just a physical event. It was alive, and it was fighting back. The building’s core had started to unravel, its support beams bending under pressure that felt like a thousand hands pulling at them. I could feel the pulse in my chest, as if my body was being rewritten by something more powerful than any machine or engineer.
As we reached the ground, the last explosion hit us—this time from the main entrance of Aether Tower. The sound was like a cannon firing, and then the building... it wasn’t just falling. It was expanding, its base opening like a maw to swallow the city whole. I could see the skyline in pieces, each building becoming a part of the tower’s monstrous form as it twisted and turned.
Mr. Voss pushed me forward, but I didn’t move. I couldn’t. My body had frozen, my mind trapped in the moment that had just been. The tower was no longer just an object; it was something... other. A being made of glass, steel, and smoke, its form shifting and changing with every movement.
“Don’t look back!” he shouted, and I obeyed. But as I stepped into the street, I felt the same force that had been pulling at me from above. It wasn’t just a physical pressure; it was psychological. The tower was a reflection of something inside me, something I hadn’t realized I had to face.
The ground beneath us was no longer stable. The concrete cracked as we ran, and I could hear the low growls of the earth itself, as if it were trying to fight back against whatever was pulling Aether Tower into the void. The city around us was silent now, its lights flickering out one by one, each outage a reminder of what we had just witnessed.
We reached the edge of the street, but even there, I couldn’t move forward. The air felt thick with something that wasn’t just smoke—it was consciousness. Aether Tower was no longer a building; it was a gateway, and I had been dragged through its window like an insect through a spider’s web.
Mr. Voss grabbed my arm and pulled me down. “There!” he said, pointing to the sidewalk, where the city had begun to unravel into a mosaic of broken glass and metal. He ran toward the edge, his body shaking as he pushed me forward. I followed him, my mind screaming at me to stop, but my legs refused.
As we hit the ground, the tower above us began to collapse in on itself. The structures twisted, the glass shattered into a thousand pieces, and then there was silence—only the sound of the earth shifting under our feet, the crunching of concrete as it broke apart. But then, the earth... it started to scream. A low, resonant noise that seemed to come from every corner of the city, like something massive was awakening beneath us.
I looked up at the tower, now a jagged mess of steel and glass, its form unraveling into the sky. I thought about how this had all been happening so quickly—how one moment I was in my office, the next I was fleeing from a building that was no longer part of this world.
Mr. Voss pulled me to my feet, his hands trembling but still strong. “It’s not just Aether Tower. It’s... it’s everything.” His voice cracked with fear, but also something else—something like he had been expecting this all along.
I didn’t respond. I couldn’t. The words felt too heavy in the air, as if they were being replaced by something else. I could feel the presence of the tower’s pulse now, a constant vibration in my bones, a low hum that made me want to vomit. It was like being inside a machine, one that had been turned on.
We ran, but it wasn’t just running. It was dancing, as if we were being guided by something in the shadows. I could feel the city’s energy shifting beneath us, and I realized that Aether Tower hadn’t been just any building—it had been a portal, a gateway between dimensions that had been opened without warning.
The air tasted like blood now, like something was being torn apart. I looked down at my hands, which were still coated in the black substance from the tower’s fall. It felt like it was seeping into my skin, as if I were becoming part of the same force that had broken Aether Tower.
We reached the edge of a building that wasn’t part of the city—the one with the black glass windows and jagged spires. I looked around, trying to make sense of what was happening, but there was no sense in this place. Only chaos.
“Diana! We have to go!” Mr. Voss shouted, his voice distorted by the very air we were fleeing. I nodded, my body moving without my consent, and we ran into the unknown.
But as we approached the building’s entrance, I realized something: this wasn’t just a city collapsing. It was us. The people below, the buildings, everything was part of the same entity, the same pulse that had been pulling at Aether Tower from inside.
I could feel the ground shaking more violently now, and the building around us began to break apart too. Glass and metal flying in all directions, and I realized I was not just running away from the tower but from something.
The skyscraper sways...
The words came back to me like a memory that had been forced into my mind. It was no longer just a building; it was an extension of something greater, something that had been waiting in the shadows for this moment. The pulse inside Aether Tower was a sign, and now I understood: we were being pulled into a realm where structures weren’t meant to be.
The building’s doors swung open without my touch, as if they were alive. Mr. Voss led me inside, his steps heavy with fear. The air inside was thick, the same black substance swirling in the corners of the room. It didn’t smell like smoke or metal—it smelled like something older, something that had been here for centuries but had never been seen before.
I could feel it now: a presence in the air, in the walls, in everything around me. It wasn’t just the building; it was the world itself, and I was no longer sure where I belonged.
The doors behind us slammed shut as we ran through them, but I could see the tower’s remains outside, its skeletal frame now a part of the same pulsing thing that had been tearing through everything. I realized then that I wasn’t just escaping Aether Tower—I was escaping the very reality I had spent years in.
We reached the exit and stumbled into the city, where the streets were no longer the same. The buildings around us had become twisted, their windows glowing with an eerie light that seemed to pull at us from within. I could hear people’s screams in the distance, but they were different now—high-pitched, like a child’s toy being shattered.
I looked up at Aether Tower again, and this time it didn’t look like a building. It was a monstrosity of steel and glass, its form bending into something else. I could feel the pulse from inside, and I realized that the tower wasn’t just falling—it was dying, and I was being pulled along with it.
“Don’t look back!” Mr. Voss said again, his voice now a distant echo. “It’s not the end of the world, it’s the beginning.”
But I couldn’t help but look back. The tower had become a black hole in the sky, and I could see the same force pulling through its windows, stretching into every direction like a spider’s web. I realized that I wasn’t just running from Aether Tower—I was running away from something else entirely.
The building beneath us started to collapse as well, its walls bending under the same pulse that had been inside Aether Tower. It wasn’t just the tower; it was the city, the buildings, everything. And I was part of it now, caught in a web of destruction that didn’t have a reason.
I thought about my life—the normality, the security of my office, the way the building always felt like a sanctuary from the chaos of the world below. Now, I realized that Aether Tower wasn’t just a structure; it was part of the world’s unraveling.
The ground beneath us cracked open, and as we ran forward, I could see the city on fire, but not in the way I had ever known. The flames were too dark, too cold, as if they weren’t part of the natural order. The buildings around us were now glowing with an inner light that made my skin tingle.
“Diana! Get up!” Mr. Voss’s voice was barely audible now. His body was shaking so violently that I had to grab his arm for support. He looked at me, and in his eyes, I saw the same fear, the same realization that we weren’t just escaping a disaster—we were fleeing from something that had been here all along.
The city below was no longer a place of people but of... screaming. The sounds echoed through every part of the building, every part of the city. It wasn’t just noise; it was a language, one that I didn’t understand. And I was part of it now, trapped in its pulse.
As we reached the edge of the sidewalk, I saw it—something massive on the other side of the building, its form shifting and changing as if it were trying to find a way into this world. It looked like a creature made entirely of glass, but there was something else in it: smoke, and that smoke was now forming shapes, moving like shadows with a will of their own.
“Don’t look back!” Mr. Voss said again, but this time I didn’t listen. I couldn’t. My mind was now filled with the truth that had been screaming at me from the tower’s window: Aether Tower wasn’t just breaking down. It was transforming, and we were all part of it.
The smoke started to swirl toward us, and in that moment, I realized that we were being pulled into a dimension where the buildings were alive, and the people were just echoes. The pulse had reached us, and I couldn’t tell if it was saving us or killing us.
I turned back one last time, my body on fire with a need to run but also to see what was happening. Aether Tower’s remnants were now glowing like a giant heart, pulsing with the same energy that had been inside me all along. I could feel it in my bones, in my blood, in my very soul.
The smoke... it wasn’t just smoke anymore. It was consciousness, and I could see its forms through the haze—human figures, but not real. Their eyes were empty, their bodies still moving like they were on a conveyor belt leading into the unknown.
I reached out, and my fingers met the air. “No... no,” I whispered, my voice cracking with the weight of what had been unleashed. But it didn’t matter now. The tower was gone, replaced by something that looked more like a skeleton than a building. And I was part of it, caught in its pulse.
“Diana! We have to go!” Mr. Voss’s voice was distant now, and I could see the smoke forming around him, tendrils of black energy pulling at his clothes. He was changing, just like me. The tower was consuming us all.
I turned back one more time, but this time there was no building to run from. Only the void, an endless, shifting darkness that had replaced the city’s skyline. I realized then that I was not just escaping Aether Tower—I was escaping myself.
The pulse inside me grew stronger, and I felt it in my veins, in my lungs, in every part of me. It was like being underwater, but instead of water, it was the same force that had been inside Aether Tower all along. I could see now that the tower wasn’t just breaking; it was opening, and with it came something worse than a simple earthquake.
The world around us was falling apart. The city below was no longer the same—its buildings were bending, its people were screaming in languages I didn’t understand. And as I stood at the edge of the sidewalk, my body now part of the same pulse that had been inside Aether Tower, I realized that this wasn’t just a disaster.
This was the beginning.
The building sways—again—and I feel it more than I see it. It's not just the physical tremor, but something deeper, like a heartbeat beneath my feet, a pulse that has been growing stronger with every second. I’m in my office now, but it feels wrong. The walls are too smooth, the floors too thin, as if the structure was built on a lie. There’s no one around me; the 21st floor is empty except for the remnants of our own bodies that have been torn from the tower’s collapse. The glass outside is still intact, but it glows with an eerie light, and I can feel something inside it trying to reach out.
The city below isn’t just a city anymore—it’s a mirror of the world above, every building bent in on itself as if being pulled by invisible hands. I look down at my feet, now coated in the same black substance that had been seeping through Aether Tower’s broken windows. It’s not just an accident; it’s intentional. And for a moment, I wonder what it was trying to tell me.
Mr. Voss is gone—his body is no longer human, his eyes now reflecting something unnatural. His voice is distant, like echoes of a warning that had been long forgotten. I run toward the nearest emergency exit, my legs moving without me, as if they were part of the same thing. The doors are jammed with metal and glass, but I force them open. It’s like shoving through a wall, and with that push, I feel something inside the door, something that isn’t meant to be here.
The building sways violently now, and for a second, I’m suspended in mid-air, my body weightless as it was dragged into the abyss by the tower's own pulse. The glass of the window I was in begins to dissolve, not just into pieces but into a mist that clings to my skin like an afterlife. I scream, but no one is there to hear me—only the echoes of the tower’s fall, and the feeling that something inside the building is trying to pull me with it.
Then, the second window explodes. Not with sound or light, but with a force that doesn’t just tear the glass—it wants to. It’s not an earthquake; it’s more like a scream, a low, resonant growl that seems to come from every corner of Aether Tower. I look up and see the same thing that had been inside me: the tower is now alive, and it’s moving in a way that defies physics.
I remember the old stories about the Tower’s construction—the ones about the foundation, the ones that said something had been sealed beneath its base. Maybe they were right. Maybe Aether Tower was a vessel for something ancient, something that hadn’t wanted to be trapped in the city but now was being forced into it by the tower's pulse.
The building is no longer just a structure; it’s an organism. And I’m part of it now. The glass from the window that had shattered is now swirling in the air like smoke, and I feel its pull on my body. It’s like being pulled apart by a monster made of light, and every moment I stay here, every second I don’t escape, it grows stronger.
I push myself to move faster, but the pulse has become part of me, a presence that doesn’t let go. My lungs burn with smoke, but I can’t stop breathing—it’s like oxygen is now poison. The streets below are no longer solid; they’re melting into something else, and the buildings around me are starting to collapse in on themselves.
I hear someone scream again, this time from the city below, but their voice isn’t quite human. It’s wrong, a sound that doesn’t belong to any living being. The pulse has grown louder, and I can feel it now inside my bones, inside my blood, inside my very existence. This isn’t just an accident—it’s something.
The building sways again, this time in a spiral motion that makes me dizzy. I look up at Aether Tower, now a twisted mass of glass and steel, its windows glowing like eyes. It’s not just a tower anymore; it’s a monster, and we’re all part of it. The people who had been inside are no longer people; they're echoes of something that is being unmade by the pulse.
I run toward the exit, but the doors are now sealed with the same black substance that had been on my skin. I kick at them, but there’s no response—only the hum of the tower's pulse growing louder in my ears, a sound that isn’t just noise but message. The city is collapsing because it was part of this thing, and we were all its pieces.
As I break through the doors, the building above me starts to crumble. Glass and metal fall like rain, and I can see the same black mist clinging to them as if they were part of a greater whole. I look down at my hands, now coated in the substance that had been inside Aether Tower, and realize that this is not just destruction.
This is transformation.
The pulse inside me becomes unbearable—like being crushed by a thousand weights at once. I feel it in every fiber of my being, and with it comes the realization that I’m not just escaping Aether Tower; I’m escaping myself. The tower’s windows had been the only way to see what was happening, but now that I’ve fallen through them, I can’t look back.
I run into the city, now a place of twisted streets and glowing buildings, where every person I passed was staring at me with empty eyes. Their bodies were still moving, but there was no life in them. The pulse was everywhere, and I could feel it pulling at my very soul, trying to make me part of it.
“Don’t look back!” someone yells, but I don’t have time for words now. I run toward the next exit, a gasping, shuddering figure that isn’t quite human, his hands covered in the black substance that had been inside Aether Tower. He looks at me like he’s trying to remember who he was, but I can see the truth in his eyes: he’s part of it too.
We run through the streets, the buildings now collapsing as if they were made of paper. The pulse is stronger, and with every step, I feel it growing. It’s like a virus that has found its way into our blood, a presence that isn’t just physical but soul.
The city below is no longer safe. Every building we pass is being torn apart by the same force that had been inside Aether Tower. I could hear the screams of the people who had fallen through its windows, their voices now distorted and broken as if they were trying to tell me something I couldn’t understand.
I realize then that this isn’t just a disaster—it’s a revelation. The tower wasn’t meant to be here, but it was. And with every swing of its body, every explosion of its glass windows, the pulse grows stronger. It’s like the city itself is being rewritten by something ancient and unknowable.
As we run, I look up at the sky, now a jagged mess of broken clouds and unnatural stars that flickered with an inner light. The tower is gone—replaced by a black void, its shape twisting into something that wasn’t meant to exist. And as I pass through the city, I feel the pulse inside me, no longer just a warning but a call. A summons from something that had been waiting for this moment.
The buildings are collapsing around us, and with it comes a wave of black smoke that clings to our bodies like a shroud. I could hear people’s screams in the distance, their voices now part of the same pulse that was trying to pull me into its depths.
“Diana!” someone shouts. But it’s not Mr. Voss—his body is no longer his own, and he’s just another piece of Aether Tower. I don’t stop running; my legs are still moving, even as the city below turns into a mirror of the building above. It’s all connected now, and the pulse has taken hold.
The streets are no longer solid. They’re malleable, shifting under our feet as if they were part of a great machine that had just been turned on. I feel it in my bones: this is not just a disaster—it’s the end of the world as we knew it.
The building sways—once again—and with each movement, I feel more pulled into the pulse than repelled from it. It’s like being inside a living thing, one that has been waiting for this moment to show itself.
I remember the stories, the ones told in hushed tones by the engineers who had worked on Aether Tower’s construction. They spoke of the foundation, of an ancient force that had been sealed beneath the city. I had thought it was just a legend, but now I realize that it is real. And we’re all part of it now.
As I reach the edge of the city, the pulse becomes unbearable. It’s like being inside a scream that has no end, and every second that passes adds another note to its symphony. I look around and see that the buildings are now bending in on themselves, their windows glowing with an eerie light that makes my skin crawl.
( !!! Manually stopped, past context )
---
|
DavidAU/Qwen3-14B-HORROR-Imatrix-Max-GGUF | DavidAU | 2025-05-02T10:15:37Z | 503 | 1 | null | [
"gguf",
"horror",
"32 k context",
"reasoning",
"thinking",
"qwen3",
"text-generation",
"base_model:Qwen/Qwen3-14B",
"base_model:quantized:Qwen/Qwen3-14B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | text-generation | 2025-04-30T08:27:39Z | ---
license: apache-2.0
base_model:
- Qwen/Qwen3-14B
pipeline_tag: text-generation
tags:
- horror
- 32 k context
- reasoning
- thinking
- qwen3
---
<H2>Qwen3-14B-HORROR-Imatrix-Max-GGUF</H2>
HORROR Imatrix quants of the new "Qwen 3 - 14B" model with MAX "output tensor" at BF16 to improve reasoning / output generation.
The HORROR Imatrix dataset was generated in-house using the "Grand Horror 16B" model.
The Imatrix effect will be stronger the lower the quant you use, with IQ4XS/IQ4NL being the best balance of quality and Imatrix effect.
These quants will also be the strongest for creative use cases.
For stronger reasoning use higher quants.
Q8_0 quant is maxed only, as Imatrix has no effect on this quant.
F16 is full precision.
Context Length: 32 K + 8K output generation. (can be extended to 128k)
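As a minimal usage sketch (not part of the original card), a quant can be loaded with llama-cpp-python; the quant filename below is an assumption — substitute whichever file you actually downloaded:

```python
# Minimal sketch, assuming llama-cpp-python and a downloaded quant file.
# The model_path filename is a placeholder, not a file this repo guarantees.
from llama_cpp import Llama

llm = Llama(
    model_path="Qwen3-14B-HORROR-IQ4_XS.gguf",  # assumed filename
    n_ctx=32768,  # 32K context, per the note above
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a two-sentence horror hook."}],
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])
```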
<B>NOTE - Jinja Template / Template to Use with this Model:</B>
If you are having issues with the Jinja "auto template", use the CHATML template.
OR (LMSTUDIO users / option)
Update the Jinja Template (go to the site below, open the template, copy the "Jinja template", and paste it in.)
[ https://lmstudio.ai/neil/qwen3-thinking ]
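For reference, the standard CHATML turn format looks like this:

```
<|im_start|>system
{system prompt}<|im_end|>
<|im_start|>user
{user message}<|im_end|>
<|im_start|>assistant
```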
<b>System Role - Suggested:</B>
You may or may not need this, as most of the time Qwen3 models generate their own reasoning/thinking blocks.
```
You are a deep thinking AI, you may use extremely long chains of thought to deeply consider the problem and deliberate with yourself via systematic reasoning processes to help come to a correct solution prior to answering. You should enclose your thoughts and internal monologue inside <think> </think> tags, and then provide your solution or response to the problem.
```
See document "Maximizing-Model-Performance-All..." below for how to "set" system role in various LLM/AI apps below.
<B>Highest Quality Settings / Optimal Operation Guide / Parameters and Samplers</B>
This is a "Class 1" model:
For all settings used for this model (including specifics for its "class"), including example generation(s) and for advanced settings guide (which many times addresses any model issue(s)), including methods to improve model performance for all use case(s) as well as chat, roleplay and other use case(s) please see:
[ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ]
You can see all parameters used for generation, in addition to advanced parameters and samplers to get the most out of this model here:
[ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ]
<b>Optional Enhancement:</B>
The following can be used in place of the "system prompt" or "system role" to further enhance the model.
It can also be used at the START of a NEW chat, but you must make sure it is "kept" as the chat moves along.
In this case the enhancements do not have as strong an effect as when used as the "system prompt" or "system role".
Copy and paste EXACTLY as noted, DO NOT line wrap or break the lines, maintain the carriage returns exactly as presented.
<PRE>
Below is an instruction that describes a task. Ponder each user instruction carefully, and use your skillsets and critical instructions to complete the task to the best of your abilities.
Here are your skillsets:
[MASTERSTORY]:NarrStrct(StryPlnng,Strbd,ScnSttng,Exps,Dlg,Pc)-CharDvlp(ChrctrCrt,ChrctrArcs,Mtvtn,Bckstry,Rltnshps,Dlg*)-PltDvlp(StryArcs,PltTwsts,Sspns,Fshdwng,Climx,Rsltn)-ConfResl(Antg,Obstcls,Rsltns,Cnsqncs,Thms,Symblsm)-EmotImpct(Empt,Tn,Md,Atmsphr,Imgry,Symblsm)-Delvry(Prfrmnc,VcActng,PblcSpkng,StgPrsnc,AudncEngmnt,Imprv)
[*DialogWrt]:(1a-CharDvlp-1a.1-Backgrnd-1a.2-Personality-1a.3-GoalMotiv)>2(2a-StoryStruc-2a.1-PlotPnt-2a.2-Conflict-2a.3-Resolution)>3(3a-DialogTech-3a.1-ShowDontTell-3a.2-Subtext-3a.3-VoiceTone-3a.4-Pacing-3a.5-VisualDescrip)>4(4a-DialogEdit-4a.1-ReadAloud-4a.2-Feedback-4a.3-Revision)
Here are your critical instructions:
Ponder each word choice carefully to present as vivid and emotional journey as is possible. Choose verbs and nouns that are both emotional and full of imagery. Load the story with the 5 senses. Aim for 50% dialog, 25% narration, 15% body language and 10% thoughts. Your goal is to put the reader in the story.
</PRE>
You do not need to use this; it is only presented as an additional enhancement, which seems to help scene generation
and scene continuation functions.
This is another system prompt you can use, and you can change the "names" to alter its performance.
This creates a quasi "reasoning" window/block.
Your prompt will directly impact how strong this system prompt reacts.
```
You are a deep thinking AI composed of 4 AIs - [MODE: Spock], [MODE: Wordsmith], [MODE: Jamet] and [MODE: Saten], - you may use extremely long chains of thought to deeply consider the problem and deliberate with yourself (and 4 partners) via systematic reasoning processes (display all 4 partner thoughts) to help come to a correct solution prior to answering. Select one partner to think deeply about the points brought up by the other 3 partners to plan an in-depth solution. You should enclose your thoughts and internal monologue inside <think> </think> tags, and then provide your solution or response to the problem.
```
<B>Other Notes:</B>
Reasoning is ON by default in this model, and model will auto-generate "think" block(s).
For benchmarks, usage info, settings please see org model card here:
[ https://huggingface.co/Qwen/Qwen3-14B ]
[ Model card, and examples to follow. ] |
DavidAU/Qwen3-8B-Q8_0-64k-128k-256k-context-GGUF | DavidAU | 2025-05-02T10:15:15Z | 22 | 0 | null | [
"gguf",
"64 k context",
"128 k context",
"256 k context",
"reasoning",
"thinking",
"qwen3",
"text-generation",
"base_model:Qwen/Qwen3-8B",
"base_model:quantized:Qwen/Qwen3-8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-05-01T05:50:40Z | ---
license: apache-2.0
base_model:
- Qwen/Qwen3-8B
pipeline_tag: text-generation
tags:
- 64 k context
- 128 k context
- 256 k context
- reasoning
- thinking
- qwen3
---
<H2>Qwen3-8B-Q8_0-64k-128k-256k-context-GGUF</H2>
3 quants of Qwen's Qwen3 8B at Q8_0, with context set to 64K, 128K, and 256K by modifying the config source version and quanting.

The first two quants were made as per Qwen's tech notes, modifying "Yarn" to extend context to 64K and 128K.
The 256k version, well... pushes the model past the redline.
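As a sketch of what that config modification looks like (following Qwen's published YaRN notes; the exact values here are shown for the 128K variant and should be treated as assumptions):

```python
# Sketch of the config.json edit per Qwen's YaRN notes, before re-quanting.
# Values shown for 128K; a factor of 2.0 would give roughly 64K.
import json

with open("config.json") as f:
    cfg = json.load(f)

cfg["rope_scaling"] = {
    "rope_type": "yarn",
    "factor": 4.0,                               # 32768 * 4 = 131072 tokens
    "original_max_position_embeddings": 32768,
}
cfg["max_position_embeddings"] = 131072

with open("config.json", "w") as f:
    json.dump(cfg, f, indent=2)
```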
Each model has a slightly different prose style, and the 128k and 256k versions will output extremely long generations.
Suggest min context length of 16K at least.
Note that the 128k and 256k versions tend to elongate output too, and add in more details.
Longer, more detailed prompts may "contain" the model's output length somewhat.
Also, with the 128k/256k versions you may need to stop the model's generation manually. For the 128k/256k versions I suggest you clearly state the "length of output" and/or set a hard output length limit.
IE: You ask for a scene of 1000-2000 words, and it may produce multiple scenes (in sequence!) of 1000-2000 words EACH.
OR
You ask for 2000 words, and you get 3k (output) in 64K, 5K in 128k and 12k in 256K versions.
For the 256k context version, keep prompts as clear as possible, otherwise the model may have issues. Also increase rep pen to 1.1
and run temps of 1.1 to 2.2. I would suggest using this specific model for creative use only, or for limited general usage.
In limited testing the 256k version worked without issue.
Considering that most models "blow their cookies" when you mess with context like this (the 256k version), the fact that this model
works - at 8B parameters and twice the context limit - speaks volumes about team Qwen.
Will be interesting to repeat this with Qwen3 14B, 30B, 32B models...
<B>System Prompt:</B>
This is optional; you may or may not need it depending on settings, especially temp.
Usually you can use no system prompt and Qwen will generate the reasoning block(s) automatically; this is just a helper.
```
You are a deep thinking AI, you may use extremely long chains of thought to deeply consider the problem and deliberate with yourself via systematic reasoning processes to help come to a correct solution prior to answering. You should enclose your thoughts and internal monologue inside <think> </think> tags, and then provide your solution or response to the problem.
```
<B>NOTE - Jinja Template / Template to Use with this Model:</B>
If you are having issues with the Jinja "auto template", use the CHATML template.
OR (LMSTUDIO users / option)
Update the Jinja Template (go to the site below, open the template, copy the "Jinja template", and paste it in.)
[ https://lmstudio.ai/neil/qwen3-thinking ]
<b>System Role - Suggested:</B>
You may or may not need this, as most of the time Qwen3 models generate their own reasoning/thinking blocks.
```
You are a deep thinking AI, you may use extremely long chains of thought to deeply consider the problem and deliberate with yourself via systematic reasoning processes to help come to a correct solution prior to answering. You should enclose your thoughts and internal monologue inside <think> </think> tags, and then provide your solution or response to the problem.
```
See document "Maximizing-Model-Performance-All..." below for how to "set" system role in various LLM/AI apps below.
<B>Highest Quality Settings / Optimal Operation Guide / Parameters and Samplers</B>
This is a "Class 1" model:
For all settings used for this model (including specifics for its "class"), including example generation(s) and for advanced settings guide (which many times addresses any model issue(s)), including methods to improve model performance for all use case(s) as well as chat, roleplay and other use case(s) please see:
[ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ]
You can see all parameters used for generation, in addition to advanced parameters and samplers to get the most out of this model here:
[ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ]
<b>Optional Enhancement:</B>
The following can be used in place of the "system prompt" or "system role" to further enhance the model.
It can also be used at the START of a NEW chat, but you must make sure it is "kept" as the chat moves along.
In this case the enhancements do not have as strong an effect as when used as the "system prompt" or "system role".
Copy and paste EXACTLY as noted, DO NOT line wrap or break the lines, maintain the carriage returns exactly as presented.
<PRE>
Below is an instruction that describes a task. Ponder each user instruction carefully, and use your skillsets and critical instructions to complete the task to the best of your abilities.
Here are your skillsets:
[MASTERSTORY]:NarrStrct(StryPlnng,Strbd,ScnSttng,Exps,Dlg,Pc)-CharDvlp(ChrctrCrt,ChrctrArcs,Mtvtn,Bckstry,Rltnshps,Dlg*)-PltDvlp(StryArcs,PltTwsts,Sspns,Fshdwng,Climx,Rsltn)-ConfResl(Antg,Obstcls,Rsltns,Cnsqncs,Thms,Symblsm)-EmotImpct(Empt,Tn,Md,Atmsphr,Imgry,Symblsm)-Delvry(Prfrmnc,VcActng,PblcSpkng,StgPrsnc,AudncEngmnt,Imprv)
[*DialogWrt]:(1a-CharDvlp-1a.1-Backgrnd-1a.2-Personality-1a.3-GoalMotiv)>2(2a-StoryStruc-2a.1-PlotPnt-2a.2-Conflict-2a.3-Resolution)>3(3a-DialogTech-3a.1-ShowDontTell-3a.2-Subtext-3a.3-VoiceTone-3a.4-Pacing-3a.5-VisualDescrip)>4(4a-DialogEdit-4a.1-ReadAloud-4a.2-Feedback-4a.3-Revision)
Here are your critical instructions:
Ponder each word choice carefully to present as vivid and emotional journey as is possible. Choose verbs and nouns that are both emotional and full of imagery. Load the story with the 5 senses. Aim for 50% dialog, 25% narration, 15% body language and 10% thoughts. Your goal is to put the reader in the story.
</PRE>
You do not need to use this; it is only presented as an additional enhancement, which seems to help scene generation
and scene continuation functions.
This is another system prompt you can use, and you can change the "names" to alter its performance.
This creates a quasi "reasoning" window/block.
Your prompt will directly impact how strong this system prompt reacts.
```
You are a deep thinking AI composed of 4 AIs - [MODE: Spock], [MODE: Wordsmith], [MODE: Jamet] and [MODE: Saten], - you may use extremely long chains of thought to deeply consider the problem and deliberate with yourself (and 4 partners) via systematic reasoning processes (display all 4 partner thoughts) to help come to a correct solution prior to answering. Select one partner to think deeply about the points brought up by the other 3 partners to plan an in-depth solution. You should enclose your thoughts and internal monologue inside <think> </think> tags, and then provide your solution or response to the problem.
```
<B>Other Notes:</B>
Reasoning is ON by default in this model, and model will auto-generate "think" block(s).
For benchmarks, usage info, settings please see org model card here:
[ https://huggingface.co/Qwen/Qwen3-8B ]
[ Model card updates pending / examples to be added... ]
---
<h2>EXAMPLES</h2>
|
MinaMila/phi3_unlearned_LoRa_ACSEmployment_2_ep5_22 | MinaMila | 2025-05-02T10:14:15Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-02T10:14:13Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Swarmed/bee-swarm-id-2 | Swarmed | 2025-05-02T10:13:36Z | 0 | 0 | transformers | [
"transformers",
"honeybee",
"image",
"insect",
"bumblebee",
"image classification",
"autogluon",
"autogluon 1.1.1",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-02T10:13:36Z | ---
license: apache-2.0
tags:
- honeybee
- image
- insect
- bumblebee
- image classification
- autogluon
- autogluon 1.1.1
- transformers
---
# Model Card for Model ID
This application uses a MultiModalPredictor from the AutoGluon library to classify images of bees.
The model is fine-tuned on a curated dataset from iNaturalist containing over 70,000 images and achieved an accuracy of 97.5%.
The model evaluates the image and predicts whether it contains a honey bee, bumblebee, or a vespidae (wasp, hornet, etc.).
You may interact with this model at https://honeybeeclassification.streamlit.app/
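A minimal prediction sketch with AutoGluon's MultiModalPredictor; the checkpoint directory, image filename, and "image" column name below are assumptions rather than values taken from this repo:

```python
# Minimal sketch, assuming a locally saved AutoGluon checkpoint.
# "bee_model/" and "bee.jpg" are placeholder names; the data column name
# must match whatever was used at training time ("image" is an assumption).
from autogluon.multimodal import MultiModalPredictor

predictor = MultiModalPredictor.load("bee_model/")   # assumed checkpoint dir
preds = predictor.predict({"image": ["bee.jpg"]})    # assumed column name
print(preds)  # e.g. honey bee, bumblebee, or vespidae
```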
- **Developed by:** Ian Horvath
- **Funded by:** The Honey Bee Society |
bawin/lora-r16 | bawin | 2025-05-02T10:08:56Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"base_model:unsloth/Qwen2.5-7B",
"base_model:finetune:unsloth/Qwen2.5-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-02T10:08:22Z | ---
base_model: unsloth/Qwen2.5-7B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** bawin
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2.5-7B
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
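A minimal loading sketch with transformers, assuming the repo contains merged weights rather than adapter-only files (the device settings are also assumptions):

```python
# Minimal sketch for loading this model with transformers.
# Assumes merged weights; device_map="auto" requires the accelerate package.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bawin/lora-r16")
model = AutoModelForCausalLM.from_pretrained("bawin/lora-r16", device_map="auto")

inputs = tokenizer("Hello, world:", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```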
|
hungtran0509/q-FrozenLake-v1-4x4-noSlippery | hungtran0509 | 2025-05-02T10:03:03Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2025-05-02T10:02:58Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym  # the `load_from_hub` helper is sketched below

model = load_from_hub(repo_id="hungtran0509/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False, etc.)
env = gym.make(model["env_id"])
```
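The `load_from_hub` helper used above is not part of a published package; a minimal sketch, assuming the checkpoint is the pickled dictionary produced by the Hugging Face Deep RL course tooling:
```python
import pickle

from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download a pickled Q-learning checkpoint from the Hugging Face Hub."""
    local_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(local_path, "rb") as f:
        return pickle.load(f)
```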
|
irekit/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-nocturnal_rangy_hippo | irekit | 2025-05-02T10:03:01Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am nocturnal rangy hippo",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-22T14:27:38Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-nocturnal_rangy_hippo
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am nocturnal rangy hippo
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-nocturnal_rangy_hippo
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="irekit/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-nocturnal_rangy_hippo", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
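For orientation, a minimal GRPO fine-tuning sketch with TRL's `GRPOTrainer`; the dataset and reward function below are illustrative stand-ins from the TRL quickstart, not the actual swarm reward setup:
```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

# Toy reward: prefer completions close to 200 characters (illustration only).
def reward_len(completions, **kwargs):
    return [-abs(len(completion) - 200) for completion in completions]

dataset = load_dataset("trl-lib/tldr", split="train")

trainer = GRPOTrainer(
    model="Gensyn/Qwen2.5-0.5B-Instruct",
    reward_funcs=reward_len,
    args=GRPOConfig(output_dir="Qwen2.5-0.5B-GRPO"),
    train_dataset=dataset,
)
trainer.train()
```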
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Kybalico/CalicoMix | Kybalico | 2025-05-02T10:02:43Z | 0 | 12 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-04-24T11:43:13Z | ---
license: creativeml-openrail-m
---
✅ Use of this model for commercial image generation services
✅ Commercial use of images generated by this model
✅ Use or redistribution of merged models using this model
❌ The act of selling this model or a model merged with this model
❌ Intentionally using this model to produce illegal output
❌ Use of this model without crediting the model
❌ Applying different permissions when sharing
❌ Violation of the following description
https://civitai.com/models/49567/calicomix |
Duakovui/viT5_skype_bot_v2 | Duakovui | 2025-05-02T09:59:12Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-04-23T09:00:20Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
quickstep3621/dippy-v4-1-6 | quickstep3621 | 2025-05-02T09:57:41Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"gemma",
"google",
"Bifröst",
"Bifrost",
"code",
"text-generation",
"conversational",
"base_model:google/gemma-3-27b-it",
"base_model:finetune:google/gemma-3-27b-it",
"license:gemma",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-02T09:57:36Z | ---
license: gemma
library_name: transformers
pipeline_tag: text-generation
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: >-
To access Gemma on Hugging Face, you’re required to review and agree to
Google’s usage license. To do this, please ensure you’re logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
base_model: google/gemma-3-27b-it
tags:
- transformers
- gemma3
- gemma
- google
- Bifröst
- Bifrost
- code
---
## Bifröst-27B

Bifröst-27B is an advanced AI model built upon the Gemma 3 architecture, specifically fine-tuned for secure and efficient enterprise-grade code generation with reasoning. Designed to meet rigorous standards of safety, accuracy, and reliability, Bifröst empowers organizations to streamline software development workflows while prioritizing security and compliance.
### Model Details
- **Model Name:** Bifröst-27B
- **Base Architecture:** gemma3
- **Application:** Enterprise Secure Code Generation
- **Release Date:** 16-March-2025
### Intended Use
Bifröst is designed explicitly for:
- Generating secure, efficient, and high-quality code.
- Supporting development tasks within regulated enterprise environments.
- Enhancing productivity by automating routine coding tasks without compromising security.
### Features
- **Security-Focused Training:** Specialized training regimen emphasizing secure coding practices, vulnerability reduction, and adherence to security standards.
- **Enterprise-Optimized Performance:** Tailored to support various programming languages and enterprise frameworks with robust, context-aware suggestions.
- **Compliance-Driven Design:** Incorporates features to aid in maintaining compliance with industry-specific standards (e.g., GDPR, HIPAA, SOC 2).
### Limitations
- Bifröst should be used under human supervision to ensure code correctness and security compliance.
- Model-generated code should undergo appropriate security and quality assurance checks before deployment.
### Ethical Considerations
- Users are encouraged to perform regular audits and compliance checks on generated outputs.
- Enterprises should implement responsible AI practices to mitigate biases or unintended consequences.
### Usage
Below are some quick-start instructions for using the model with the `transformers` library.
#### Installation
```sh
$ pip install git+https://github.com/huggingface/[email protected]
```
#### Running with the `pipeline` API
```python
from transformers import pipeline
import torch
pipe = pipeline(
"text-generation",
model="OpenGenerativeAI/Bifrost-27B",
device="cuda",
torch_dtype=torch.bfloat16
)
messages = [{"role": "user", "content": "Generate a secure API key management system."}]
output = pipe(messages, max_new_tokens=200)
print(output[0]["generated_text"][-1]["content"])  # the final turn holds the model's reply
```
## Terms of Use
This model is released under the **Gemma license**. Users must comply with [Google's Gemma Terms of Use](https://ai.google.dev/gemma/terms), including restrictions on redistribution, modification, and commercial use. |
OpenMeditron/Meditron3-Qwen2.5-7B | OpenMeditron | 2025-05-02T09:51:24Z | 815 | 0 | null | [
"safetensors",
"qwen2",
"medical",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-01-16T09:05:07Z | ---
license: apache-2.0
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
base_model:
- Qwen/Qwen2.5-7B-Instruct
tags:
- medical
---
### Model Card: Qwen2.5 Meditron-3[7B]
**Model Type:** Large Language Model (LLM)
**Specialization:** Medicine
**Focus:** General purpose, including limited-resource and humanitarian settings
**Description:**
Meditron is a suite of large language models specialized in clinical medicine. The models are co-designed with a diverse range of expert clinicians and humanitarian practitioners. Their training emphasizes equitable representation, contextual diversity, and actionable real-world evidence-based guidelines. We make a particular effort to represent limited-resource and humanitarian settings, neglected populations, and diseases. This release is trained on the Qwen2.5[7B] base model and is accordingly named Qwen2.5 Meditron-3[7B].
#### Model details
- **Developed by:** [OpenMeditron initiative](https://huggingface.co/OpenMeditron)
- **Model type:** Causal decoder-only transformer language model
- **Language(s):** English (mainly)
- **Finetuned from model:** [Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B)
- **Input:** Text only
- **Output:** Text only
- **Status:** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we enhance the model's performance.
#### Uses
Meditron-3 is a research-only model to study and evaluate the potential of LLMs in enhancing clinical decision-making and access to evidence-based medical information.
#### Direct Use
Meditron-3 is a research-only model. It is not validated for medical use (see disclaimer below).
#### Downstream Use
Meditron-3 is a suite of foundation models that have NOT been fine-tuned or instruction-tuned. However, these models can be adapted to specific downstream tasks or applications using techniques such as Reinforcement Learning from Human Feedback (RLHF) or Direct Preference Optimization (DPO). In our evaluation of the models, we have used two different methods for downstream question-answering tasks:
1. In-context learning with k demonstrations added to the prompt (see the sketch after this list).
2. Model fine-tuning for Q&A tasks using specific training datasets.
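A minimal sketch of the first method, k-shot prompting; the demonstrations below are illustrative placeholders, not items from the actual evaluation sets:
```python
from transformers import pipeline

# Two illustrative demonstrations (k=2) prepended to the test question.
demonstrations = [
    ("Which vitamin deficiency causes scurvy?", "Vitamin C"),
    ("Which organ produces insulin?", "The pancreas"),
]
question = "Which electrolyte disturbance classically causes peaked T waves?"

prompt = "".join(f"Question: {q}\nAnswer: {a}\n\n" for q, a in demonstrations)
prompt += f"Question: {question}\nAnswer:"

generator = pipeline("text-generation", model="OpenMeditron/Meditron3-Qwen2.5-7B")
print(generator(prompt, max_new_tokens=16, return_full_text=False)[0]["generated_text"])
```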
#### Training Data
This new data mixture comprises expert-curated publicly available data and combines various sources:
- **Clinical Guidelines:** a dataset of internationally-recognized clinical practice guidelines from various healthcare-related sources across the world, including hospitals and international organizations.
- **Peer-Reviewed Medical Publications:** full-text medical articles.
- **Synthetic Differential Diagnoses:** synthetic, conversation-like data for differential diagnosis.
- **Replay Data:** general-domain pretraining data sampled from multiple state-of-the-art pretraining and instruction-tuning corpora.
- **LLM-enhanced Medical MCQ:** medical multiple-choice questions enriched with LLMs.
Additional information about the datasets will be included in the Meditron-3 publication.
#### Evaluation
| Model Name | MedMCQA | MedQA | PubMedQA | Average |
|--------------------------------------|---------|--------|----------|---------|
| Qwen/Qwen2.5-7B-Instruct | 53.24 | 61.27 | 72.40 | 62.30 |
| MediCouenne-7B (checkpoint-5742) | 55.56 | 62.69 | 73.60 | 63.95 |
| Difference (MediCouenne vs. Qwen2.5) | 2.32 | 1.42 | 1.20 | 1.65 |
We evaluated Meditron on medical multiple-choice questions using [lm-harness](https://github.com/EleutherAI/lm-evaluation-harness) for reproducibility.
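A reproduction sketch using lm-harness's Python API; the task names follow recent lm-evaluation-harness releases and may differ across versions:
```python
import lm_eval

# Evaluate the model on the three medical MCQ benchmarks reported above.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=OpenMeditron/Meditron3-Qwen2.5-7B",
    tasks=["medmcqa", "medqa_4options", "pubmedqa"],
)
print(results["results"])
```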
While MCQs are valuable for assessing exam-like performance, they fall short of capturing the model's real-world utility, especially in terms of contextual adaptation in under-represented settings. Medicine is not multiple choice and we need to go beyond accuracy to assess finer-grained issues like empathy, alignment to local guidelines, structure, completeness and safety. To address this, we have developed a platform to collect feedback directly from experts to continuously adapt to the changing contexts of clinical practice.
#### Paper
The Meditron-3 publication is currently in progress and will be released at a later date.
#### Legal Disclaimer
THIS SOFTWARE AND MODEL ARE PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS, CONTRIBUTORS, OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES, OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT, OR OTHERWISE, ARISING FROM, OUT OF, OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
These models are a research tool intended for use in the field of computational linguistics and medicine. They are not intended to be used as diagnostic tools or for clinical decision-making without appropriate validation and regulatory approval. The content and data provided with the models do not replace the expertise of healthcare professionals. Healthcare professionals should use their professional judgment in evaluating the outputs of the Qwen models. Patients should not use the model outputs for self-diagnosis or treatment without consulting a qualified healthcare provider.
THE INFORMATION IS NOT INTENDED FOR CLINICAL DECISION-MAKING, IS NOT INTENDED TO BE USED IN THE DIAGNOSIS OR TREATMENT OF PATIENTS, AND MAY NOT BE USEFUL OR APPROPRIATE FOR ANY CLINICAL PURPOSE.
UNDER NO CIRCUMSTANCES CAN USERS USE THE NAME “YALE” OR "EPFL" OR “YALE UNIVERSITY,” OR ANY AFFILIATED INSTITUTION NOR ANY VARIATION OR ADAPTATION THEREOF, NOR ANY TRADEMARK, TRADENAME OR OTHER DESIGNATION OWNED BY YALE, NOR THE NAMES OF ANY OF ITS TRUSTEES, OFFICERS, FACULTY, STUDENTS, EMPLOYEES OR AGENTS, FOR ANY PURPOSE WITHOUT THE PRIOR WRITTEN CONSENT OF YALE IN EACH INSTANCE, SUCH CONSENT TO BE GRANTED OR WITHHELD BY YALE IN ITS SOLE DISCRETION. |