Column schema: modelId (string, 5–139 chars) · author (string, 2–42 chars) · last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-06-03 12:30:42) · downloads (int64, 0–223M) · likes (int64, 0–11.7k) · library_name (string, 466 classes) · tags (sequence, lengths 1–4.05k) · pipeline_tag (string, 54 classes) · createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-06-03 12:30:29) · card (string, 11–1.01M chars)

modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
---|---|---|---|---|---|---|---|---|---
RobertoNeglia/pepe_generator_sd2_v2 | RobertoNeglia | 2025-05-28T05:03:21Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"diffusers-training",
"lora",
"base_model:stabilityai/stable-diffusion-2",
"base_model:adapter:stabilityai/stable-diffusion-2",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2025-05-27T23:56:54Z | ---
base_model: stabilityai/stable-diffusion-2
library_name: diffusers
license: creativeml-openrail-m
inference: true
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- diffusers-training
- lora
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# LoRA text2image fine-tuning - RobertoNeglia/pepe_generator_sd2_v2
These are LoRA adaptation weights for stabilityai/stable-diffusion-2. The weights were fine-tuned on the RobertoNeglia/pepe_dataset dataset.
## Intended uses & limitations
#### How to use
A minimal `diffusers` sketch filling in the TODO left by the training script (an assumption, not the author's snippet; inference settings are untested):
```python
# Untested sketch: load the SD2 base model, then attach these LoRA weights.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("RobertoNeglia/pepe_generator_sd2_v2")

image = pipe("pepe the frog as an astronaut").images[0]
image.save("pepe.png")
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
tralalelo/fine-tune-Llama3.1-8B-merged-4bit-28-5 | tralalelo | 2025-05-28T05:01:20Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2025-05-28T04:59:13Z | ---
base_model: unsloth/llama-3.1-8b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** tralalelo
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.1-8b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
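For inference, a minimal `transformers` sketch (not from the author; assumes the merged 4-bit weights load directly with the quantization config saved in the repo):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tralalelo/fine-tune-Llama3.1-8B-merged-4bit-28-5"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# The repo is tagged 4-bit/bitsandbytes, so the quantization config should apply automatically.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Summarize what LoRA fine-tuning does."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```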
|
abhijeetalande12/youtube-roberta-classifier-v2 | abhijeetalande12 | 2025-05-28T04:58:26Z | 0 | 0 | null | [
"safetensors",
"roberta",
"text-classification",
"en",
"region:us"
] | text-classification | 2025-05-28T04:55:55Z | ---
pipeline_tag: text-classification
language: en
---
# YouTube Roberta Classifier
This is a fine-tuned RoBERTa model for classifying YouTube video descriptions or titles into categories.
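A minimal usage sketch (not part of the original card; the labels returned are whatever the model's config defines):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="abhijeetalande12/youtube-roberta-classifier-v2",
)
# Returns the predicted category label and its score for a title or description.
print(classifier("10-minute full body workout you can do at home"))
```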
|
Exclusive-Jobz-Hunting-Viral-Video-Telegra/Original.ull.Clip.Jobz.Hunting.Sajal.Malik.Viral.Video.Leaks.Official | Exclusive-Jobz-Hunting-Viral-Video-Telegra | 2025-05-28T04:57:44Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-28T04:57:32Z | ---
license: apache-2.0
---
[►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► 𝙁𝙪𝙡𝙡 𝙑𝙞𝙙𝙚𝙤❤️❤️⬇️⬇️ ](https://the-goat-sanda.blogspot.com/p/goat-sanda-03.html)
[►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► 𝙁𝙪𝙡𝙡 𝙑𝙞𝙙𝙚𝙤❤️❤️⬇️⬇️ ](https://the-goat-sanda.blogspot.com/p/goat-sanda-03.html)
<a href="https://the-goat-sanda.blogspot.com/p/goat-sanda-02.html"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Nature" class="responsive"></a> |
ModSpecialization/mistral-7b-bnb-4bit-synthetic-creditcard-fraud-detection | ModSpecialization | 2025-05-28T04:56:15Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gguf",
"mistral",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/mistral-7b-bnb-4bit",
"base_model:quantized:unsloth/mistral-7b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"8-bit",
"region:us"
] | null | 2025-05-27T16:17:25Z | ---
base_model: unsloth/mistral-7b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** ModSpecialization
- **License:** apache-2.0
- **Finetuned from model:** unsloth/mistral-7b-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Calvinm/oliviaquinnbest | Calvinm | 2025-05-28T04:55:09Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"fluxgym",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-28T04:55:08Z | ---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- fluxgym
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: OQBEST
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# OliviaQuinnBEST
A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym)
<Gallery />
## Trigger words
You should use `OQBEST` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc.
Weights for this model are available in Safetensors format.
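As an alternative to the UIs above, a hedged `diffusers` sketch (an assumption, not the author's workflow; assumes the LoRA file in this repo is discoverable by `load_lora_weights`):
```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("Calvinm/oliviaquinnbest")

# Include the trigger word OQBEST in the prompt.
image = pipe("OQBEST, studio portrait photo", num_inference_steps=28).images[0]
image.save("oqbest.png")
```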
|
thanhle1702/phi4-finetuned-sst2 | thanhle1702 | 2025-05-28T04:53:43Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-classification",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-05-27T18:59:42Z | ---
base_model: unsloth/Phi-4-unsloth-bnb-4bit
tags:
- text-classification
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** thanhle1702
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Phi-4-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) |
yutakas/llava-v1.6-mistral-7b-hf-motogp-2025-teams | yutakas | 2025-05-28T04:53:20Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-27T22:55:30Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
CodeAtCMU/Qwen3-1.7B-Base_full_sft_natural_language_data_shard_2 | CodeAtCMU | 2025-05-28T04:48:58Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-28T04:47:39Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
18-Katrina-Lim-Viral-Video-Orginal/wATCH.Katrina.Lim.viral.video.Leaks.Official | 18-Katrina-Lim-Viral-Video-Orginal | 2025-05-28T04:48:41Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-28T04:48:10Z | <animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
CodeAtCMU/Qwen3-1.7B-Base_full_sft_natural_language_data_shard_8 | CodeAtCMU | 2025-05-28T04:46:10Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-28T04:44:51Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
sxhui/gfb | sxhui | 2025-05-28T04:46:05Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-28T04:46:05Z | ---
license: apache-2.0
---
|
JpChi/FrozenLake | JpChi | 2025-05-28T04:45:56Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2025-05-28T04:43:41Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: FrozenLake
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym

# `load_from_hub` is the helper defined in the Hugging Face Deep RL course notebook.
model = load_from_hub(repo_id="JpChi/FrozenLake", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
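A short greedy-rollout sketch to go with the snippet above (an assumption: the saved dict exposes the Q-table under the `"qtable"` key, as in the course template):
```python
import numpy as np

state, info = env.reset()
done = False
while not done:
    # Act greedily with respect to the learned Q-table.
    action = int(np.argmax(model["qtable"][state]))
    state, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
```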
|
mitra-mir/QA-model-ESG | mitra-mir | 2025-05-28T04:45:46Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-28T04:38:29Z | ---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** mitra-mir
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
CodeAtCMU/Qwen3-1.7B-Base_full_sft_CPP_data_12K | CodeAtCMU | 2025-05-28T04:43:16Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-28T04:41:31Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
hnv2520/excavator_Gemma3_12B_4bit_3e | hnv2520 | 2025-05-28T04:42:38Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/gemma-3-12b-it-unsloth-bnb-4bit",
"base_model:quantized:unsloth/gemma-3-12b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | image-text-to-text | 2025-05-28T04:38:10Z | ---
base_model: unsloth/gemma-3-12b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** hnv2520
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-3-12b-it-unsloth-bnb-4bit
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
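A tentative inference sketch (not from the author; assumes a recent `transformers` release with the `image-text-to-text` pipeline and Gemma 3 support, and the image URL below is a placeholder):
```python
from transformers import pipeline

pipe = pipeline(
    "image-text-to-text",
    model="hnv2520/excavator_Gemma3_12B_4bit_3e",
    device_map="auto",
)
messages = [{
    "role": "user",
    "content": [
        {"type": "image", "url": "https://example.com/excavator.jpg"},  # placeholder image URL
        {"type": "text", "text": "Describe the excavator in this photo."},
    ],
}]
print(pipe(text=messages, max_new_tokens=64))
```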
|
Watch-Full-video-Tummy-Tear-Dark-Video/Original.Full.Clip.Tummy.Tear.Dark.Viral.Video.Leaks.Official | Watch-Full-video-Tummy-Tear-Dark-Video | 2025-05-28T04:39:52Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-28T04:39:35Z | <animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
CodeAtCMU/Qwen3-1.7B-Base_full_sft_natural_language_data_shard_5 | CodeAtCMU | 2025-05-28T04:36:04Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-28T04:34:50Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Aasdasd/manchu_test_model | Aasdasd | 2025-05-28T04:33:51Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"unsloth",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2025-05-28T04:32:51Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
CodeAtCMU/Qwen3-1.7B-Base_full_sft_natural_language_data_12K | CodeAtCMU | 2025-05-28T04:33:24Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-28T04:32:09Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
im-scarlettjames/newscarlett | im-scarlettjames | 2025-05-28T04:33:18Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"text-to-image",
"lora",
"fal",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-28T04:32:54Z | ---
tags:
- flux
- text-to-image
- lora
- diffusers
- fal
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: imscarlett
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# newscarlett
<Gallery />
## Model description
## Trigger words
You should use `imscarlett` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/im-scarlettjames/newscarlett/tree/main) them in the Files & versions tab.
## Training at fal.ai
Training was done using [fal.ai/models/fal-ai/flux-lora-fast-training](https://fal.ai/models/fal-ai/flux-lora-fast-training).
|
CodeAtCMU/Qwen3-1.7B-Base_full_sft_natural_language_data_shard_1 | CodeAtCMU | 2025-05-28T04:30:37Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-28T04:29:22Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Soldsoul86/aatma | Soldsoul86 | 2025-05-28T04:29:08Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-28T04:29:08Z | ---
license: apache-2.0
---
|
CodeAtCMU/Qwen3-1.7B-Base_full_sft_Python_data_12K | CodeAtCMU | 2025-05-28T04:27:49Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-28T04:26:33Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
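In the absence of an official snippet, a minimal sketch assuming the standard 🤗 Transformers text-generation pipeline applies to this checkpoint (the prompt is illustrative):

```python
from transformers import pipeline

# Causal LM fine-tuned on Python code, so a code prefix is a natural prompt.
generator = pipeline(
    "text-generation",
    model="CodeAtCMU/Qwen3-1.7B-Base_full_sft_Python_data_12K",
)
print(generator("def fibonacci(n):", max_new_tokens=64)[0]["generated_text"])
```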
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
maple-research-lab/LLaDOU-v0-Code | maple-research-lab | 2025-05-28T04:23:17Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llada",
"custom_code",
"en",
"dataset:KodCode/KodCode-V1-SFT-R1",
"arxiv:2505.10446",
"base_model:GSAI-ML/LLaDA-8B-Instruct",
"base_model:finetune:GSAI-ML/LLaDA-8B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-23T11:38:56Z | ---
base_model:
- GSAI-ML/LLaDA-8B-Instruct
language:
- en
library_name: transformers
datasets:
- KodCode/KodCode-V1-SFT-R1
---
# Large Language Diffusion with Ordered Unmasking (LLaDOU)
<a href="https://arxiv.org/abs/2505.10446"><img src="https://img.shields.io/badge/arXiv-2505.10446-b31b1b.svg" alt="ArXiv"></a>
<a href="https://github.com/maple-research-lab/LLaDOU"><img src="https://img.shields.io/badge/GitHub-LLaDOU-777777.svg" alt="GitHub"></a>
We introduce the **L**arge **La**nguage **D**iffusion with **O**rdered **U**nmasking (**LLaDOU**), trained by reinforcement learning on a new reasoning paradigm named the **D**iffusion **C**hain **o**f **L**ateral **T**hought (**DCoLT**) for diffusion language models.
Compared to standard CoT, DCoLT is distinguished with several notable features:
- **Bidirectional Reasoning**: Allows global refinement throughout generation with bidirectional self-attention masks.
- **Format-Free Reasoning**: No strict requirement of grammatical correctness in its intermediate steps of thought.
- **Nonlinear Generation**: Generates tokens at various positions in different steps.

## Instructions
**LLaDOU-v0-Code** is a code-specific model trained on a subset of [KodCode-V1-SFT-R1](https://huggingface.co/datasets/KodCode/KodCode-V1-SFT-R1).
For inference code and detailed instructions, please refer to our GitHub page: [maple-research-lab/LLaDOU](https://github.com/maple-research-lab/LLaDOU). |
kingmhd1519/speecht5_Mehdi_Final_Model | kingmhd1519 | 2025-05-28T04:22:50Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-to-audio | 2025-05-28T04:01:40Z | ---
library_name: transformers
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
model-index:
- name: speecht5_Mehdi_Final_Model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_Mehdi_Final_Model
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4061
## Model description
More information needed
## Intended uses & limitations
More information needed
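Until official usage details are added, here is a minimal sketch of how a SpeechT5 TTS fine-tune is typically loaded, assuming the processor was saved with this checkpoint (otherwise fall back to `microsoft/speecht5_tts`) and using a placeholder speaker embedding:

```python
import torch
import soundfile as sf
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

repo = "kingmhd1519/speecht5_Mehdi_Final_Model"
processor = SpeechT5Processor.from_pretrained(repo)
model = SpeechT5ForTextToSpeech.from_pretrained(repo)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Hello, this is a test sentence.", return_tensors="pt")
# Placeholder 512-dim speaker embedding; replace with an x-vector for the target voice.
speaker_embeddings = torch.zeros((1, 512))
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```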
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 500
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.5191 | 1.3189 | 100 | 0.4804 |
| 0.4849 | 2.6379 | 200 | 0.4396 |
| 0.4628 | 3.9568 | 300 | 0.4248 |
| 0.4324 | 5.2658 | 400 | 0.4125 |
| 0.4395 | 6.5847 | 500 | 0.4061 |
### Framework versions
- Transformers 4.52.2
- Pytorch 2.6.0+cu124
- Datasets 2.14.4
- Tokenizers 0.21.1
|
wutZYH/fortunetelling | wutZYH | 2025-05-28T04:16:11Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-28T04:16:06Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
HPLT/hplt2c_fra10-tur90_checkpoints | HPLT | 2025-05-28T04:14:04Z | 0 | 0 | null | [
"pytorch",
"llama",
"HPLT",
"decoder",
"fr",
"tr",
"dataset:HPLT/HPLT2.0_cleaned",
"arxiv:2503.10267",
"license:apache-2.0",
"region:us"
] | null | 2025-05-26T08:49:52Z | ---
language:
- fr
- tr
tags:
- HPLT
- decoder
license: apache-2.0
datasets:
- HPLT/HPLT2.0_cleaned
---
# HPLT v2.0 - Cleaned - French (10%), Turkish (90%)
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
This is one of the decoder-only language models trained on [HPLT2.0_cleaned](https://huggingface.co/datasets/HPLT/HPLT2.0_cleaned).
All the HPLT decoder-only models use the same hyper-parameters, roughly following the llama architecture with 2.15B parameters in total:
- hidden size: 2048
- attention heads: 32
- layers: 24
- sequence length: 2048
## Intermediate checkpoints
We are releasing intermediate checkpoints for each model at intervals of 1000 training steps, each in a separate branch. Branches are named `checkpoint_` followed by the zero-padded training step: for example, `checkpoint_0005000`. The checkpoints range from `checkpoint_0001000` to `checkpoint_0047684`, and the latter is also in the main branch.
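As a sketch, any checkpoint branch can be loaded with the standard causal-LM classes (the `revision` below is one of the intermediate branches named above; omit it for the final model):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "HPLT/hplt2c_fra10-tur90_checkpoints"
tokenizer = AutoTokenizer.from_pretrained(repo)
# Pick any intermediate branch, e.g. step 5000; drop `revision` for the main branch.
model = AutoModelForCausalLM.from_pretrained(repo, revision="checkpoint_0005000")
```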
## Cite us
```bibtex
@misc{burchell2025expandedmassivemultilingualdataset,
title={An Expanded Massive Multilingual Dataset for High-Performance Language Technologies},
author={Laurie Burchell and Ona de Gibert and Nikolay Arefyev and Mikko Aulamo and Marta Bañón and Pinzhen Chen and Mariia Fedorova and Liane Guillou and Barry Haddow and Jan Hajič and Jindřich Helcl and Erik Henriksson and Mateusz Klimaszewski and Ville Komulainen and Andrey Kutuzov and Joona Kytöniemi and Veronika Laippala and Petter Mæhlum and Bhavitvya Malik and Farrokh Mehryary and Vladislav Mikhailov and Nikita Moghe and Amanda Myntti and Dayyán O'Brien and Stephan Oepen and Proyag Pal and Jousia Piha and Sampo Pyysalo and Gema Ramírez-Sánchez and David Samuel and Pavel Stepachev and Jörg Tiedemann and Dušan Variš and Tereza Vojtěchová and Jaume Zaragoza-Bernabeu},
year={2025},
eprint={2503.10267},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2503.10267},
}
``` |
mrkmja/DavidArch | mrkmja | 2025-05-28T04:12:16Z | 0 | 0 | null | [
"en",
"region:us"
] | null | 2024-12-23T21:48:56Z | ---
language:
- en
---
<img src="https://img.weights.com/insecure/size:640:0/resizing_type:fill/sharpen:0.5/plain/https://assets.weights.com/cm51k8egk00bhl218q3dnz6p7/0b8de5a54504111d751a52b9dc34225f.webp" style="width: 500px" />
# David Archuleta (2020s)
- **Model/dataset by:** MRKMJA
- **Epochs:** 750
- RVC v2, RMVPE, bs 14, original pretrain
- David Archuleta model trained on various songs recorded from 2020 to 2021. A few songs from his 2018 Christmas album are also included for a more versatile dataset. Credit (@MRKMJA) is always nice. |
CodeAtCMU/Qwen3-1.7B-Base_full_sft_natural_language_data_shard_7 | CodeAtCMU | 2025-05-28T04:12:11Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-28T04:10:52Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
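Absent an official example, a minimal sketch assuming the standard causal-LM interface (the prompt is illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "CodeAtCMU/Qwen3-1.7B-Base_full_sft_natural_language_data_shard_7"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16)

inputs = tokenizer("The quick brown fox", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```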
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
peskova/donut-lora-r8-exp | peskova | 2025-05-28T04:11:25Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-28T04:11:17Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
John6666/vanimix-v15-sdxl | John6666 | 2025-05-28T04:10:54Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"semi-realistic",
"realistic",
"body",
"darker",
"illustrious",
"en",
"base_model:OnomaAIResearch/Illustrious-xl-early-release-v0",
"base_model:finetune:OnomaAIResearch/Illustrious-xl-early-release-v0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2025-05-28T04:05:04Z | ---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- semi-realistic
- realistic
- body
- darker
- illustrious
base_model: OnomaAIResearch/Illustrious-xl-early-release-v0
---
Original model is [here](https://civitai.com/models/1448271?modelVersionId=1839489).
This model was created by [Vanilla0](https://civitai.com/user/Vanilla0).
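Usage is not documented here, but since the repo is tagged with `diffusers:StableDiffusionXLPipeline`, a minimal sketch would be (the prompt is illustrative):

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "John6666/vanimix-v15-sdxl", torch_dtype=torch.float16
).to("cuda")
image = pipe("1girl, portrait, soft lighting", num_inference_steps=28).images[0]
image.save("out.png")
```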
|
CodeAtCMU/Qwen3-1.7B-Base_full_sft_TypeScript_data_12K | CodeAtCMU | 2025-05-28T04:10:44Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-28T04:09:25Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
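A minimal hedged sketch, assuming the standard text-generation pipeline (the TypeScript prompt is illustrative):

```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="CodeAtCMU/Qwen3-1.7B-Base_full_sft_TypeScript_data_12K",
)
prompt = "function debounce<T extends (...args: any[]) => void>(fn: T, ms: number) {"
print(generator(prompt, max_new_tokens=64)[0]["generated_text"])
```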
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
AmberYifan/Llama-3.1-8B-sft-SPIN-gpt4o-beta0.1-lr1e-7 | AmberYifan | 2025-05-28T04:09:38Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"arxiv:2305.18290",
"base_model:AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF",
"base_model:finetune:AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-28T02:43:29Z | ---
base_model: AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF
library_name: transformers
model_name: Llama-3.1-8B-sft-SPIN-gpt4o-beta0.1-lr1e-7
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for Llama-3.1-8B-sft-SPIN-gpt4o-beta0.1-lr1e-7
This model is a fine-tuned version of [AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF](https://huggingface.co/AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="AmberYifan/Llama-3.1-8B-sft-SPIN-gpt4o-beta0.1-lr1e-7", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yifanwang/huggingface/runs/w0rfbm2y)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.12.2
- Transformers: 4.46.3
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.20.3
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Kuncoroahmadt/Kunc | Kuncoroahmadt | 2025-05-28T04:08:30Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-28T04:08:30Z | ---
license: apache-2.0
---
|
CodeAtCMU/Qwen3-1.7B-Base_full_sft_Go_data_12K | CodeAtCMU | 2025-05-28T04:07:49Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-28T04:06:33Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
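A minimal sketch under the same assumption as the sibling CodeAtCMU checkpoints — a standard causal LM fine-tuned on Go code (the prompt is illustrative):

```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="CodeAtCMU/Qwen3-1.7B-Base_full_sft_Go_data_12K",
)
print(generator("func reverse(s string) string {", max_new_tokens=64)[0]["generated_text"])
```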
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
AmberYifan/Llama-3.1-8B-sft-peers-pool-IPO | AmberYifan | 2025-05-28T04:06:31Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"arxiv:2305.18290",
"base_model:AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF",
"base_model:finetune:AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-28T03:46:45Z | ---
base_model: AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF
library_name: transformers
model_name: Llama-3.1-8B-sft-peers-pool-IPO
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for Llama-3.1-8B-sft-peers-pool-IPO
This model is a fine-tuned version of [AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF](https://huggingface.co/AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="AmberYifan/Llama-3.1-8B-sft-peers-pool-IPO", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yifanwang/huggingface/runs/5ojwwghn)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.12.2
- Transformers: 4.46.3
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.20.3
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
CodeAtCMU/Qwen3-1.7B-Base_full_sft_natural_language_data_shard_0 | CodeAtCMU | 2025-05-28T04:05:06Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-28T04:03:44Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
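As with the other shards, a hedged sketch assuming the standard causal-LM interface:

```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="CodeAtCMU/Qwen3-1.7B-Base_full_sft_natural_language_data_shard_0",
)
print(generator("Once upon a time", max_new_tokens=32)[0]["generated_text"])
```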
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mrkmja/Zayn | mrkmja | 2025-05-28T04:04:48Z | 0 | 0 | null | [
"en",
"region:us"
] | null | 2024-12-06T10:23:22Z | ---
language:
- en
---
<img src="https://img.weights.com/insecure/size:640:0/resizing_type:fill/sharpen:0.5/plain/https://assets.weights.com/clz5z8anz0ekrlo6zxazv3zd8/2c95301efb7648948a9d6c7c58e7e68d.jpg" style="width: 500px" />
# Zayn Malik (Late 2010s)
- **Model/dataset by:** MRKMJA
- **Epochs:** 1000
- RVC v2, RMVPE, bs 6, original pretrain
- Zayn model trained on 12 minutes of acapellas from *Mind of Mine* (2016) and *Nobody Is Listening* (2021). Also includes vocals from "Dusk Till Dawn" (2017). Credit (@MRKMJA) is always appreciated. 🙂 |
mrkmja/SummerWalker21 | mrkmja | 2025-05-28T04:04:10Z | 0 | 0 | null | [
"en",
"region:us"
] | null | 2025-01-03T07:14:30Z | ---
language:
- en
---
<img src="https://img.weights.com/insecure/size:640:0/resizing_type:fill/sharpen:0.5/plain/https://assets.weights.com/cm5gf8z48zb24mu18ccbhi9kn/74ddd7544f54aaa798dcef90b31ac5e9.webp" style="width: 500px" />
# Summer Walker (Still Over It) (2021)
- **Model/dataset by:** MRKMJA
- **Epochs:** 450
- RVC v2, RMVPE, bs 5, original pretrain
- Summer Walker model trained on 8 minutes of vocals from *Still Over It* (2021). Please credit @MRKMJA if using. :) |
gowribharat/book-qwen3-1.7b-finetuned | gowribharat | 2025-05-28T04:02:32Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:Qwen/Qwen3-1.7B",
"base_model:adapter:Qwen/Qwen3-1.7B",
"license:apache-2.0",
"region:us"
] | null | 2025-05-27T20:56:50Z | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen3-1.7B
tags:
- generated_from_trainer
model-index:
- name: book-qwen3-1.7b-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# book-qwen3-1.7b-finetuned
This model is a fine-tuned version of [Qwen/Qwen3-1.7B](https://huggingface.co/Qwen/Qwen3-1.7B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7757
## Model description
More information needed
## Intended uses & limitations
More information needed
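No usage notes yet; since this is a PEFT (LoRA) adapter for `Qwen/Qwen3-1.7B`, a minimal loading sketch would be:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-1.7B")
model = PeftModel.from_pretrained(base, "gowribharat/book-qwen3-1.7b-finetuned")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-1.7B")
```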
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.4934 | 1.0 | 56 | 2.6620 |
| 2.4392 | 2.0 | 112 | 2.6704 |
| 2.4736 | 3.0 | 168 | 2.6762 |
| 2.4025 | 4.0 | 224 | 2.6884 |
| 2.3417 | 5.0 | 280 | 2.7035 |
| 2.3312 | 6.0 | 336 | 2.7211 |
| 2.2712 | 7.0 | 392 | 2.7384 |
| 2.3361 | 8.0 | 448 | 2.7507 |
| 2.2414 | 9.0 | 504 | 2.7666 |
| 2.1561 | 10.0 | 560 | 2.7757 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.52.2
- Pytorch 2.6.0+cu124
- Datasets 2.13.0
- Tokenizers 0.21.1 |
Hastagaras/q4b-run-27-test | Hastagaras | 2025-05-28T04:01:15Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-28T03:58:08Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
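In the absence of an official snippet, a hedged sketch assuming the standard text-generation pipeline:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="Hastagaras/q4b-run-27-test")
print(generator("Hello, world.", max_new_tokens=32)[0]["generated_text"])
```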
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/s1K-7B-cogRL-v0.1-4-GGUF | mradermacher | 2025-05-28T04:00:06Z | 0 | 1 | transformers | [
"transformers",
"gguf",
"en",
"base_model:PeterLauLukCh/s1K-7B-cogRL-v0.1-4",
"base_model:quantized:PeterLauLukCh/s1K-7B-cogRL-v0.1-4",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-28T03:07:31Z | ---
base_model: PeterLauLukCh/s1K-7B-cogRL-v0.1-4
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/PeterLauLukCh/s1K-7B-cogRL-v0.1-4
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
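As a minimal sketch, one way to run a downloaded quant from Python is with `llama-cpp-python`; the file name below assumes the Q4_K_M quant from the table that follows:
```python
from llama_cpp import Llama

# Sketch: assumes the Q4_K_M file from the table below is in the working directory.
llm = Llama(model_path="s1K-7B-cogRL-v0.1-4.Q4_K_M.gguf", n_ctx=4096)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello!"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```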
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/s1K-7B-cogRL-v0.1-4-GGUF/resolve/main/s1K-7B-cogRL-v0.1-4.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/s1K-7B-cogRL-v0.1-4-GGUF/resolve/main/s1K-7B-cogRL-v0.1-4.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/s1K-7B-cogRL-v0.1-4-GGUF/resolve/main/s1K-7B-cogRL-v0.1-4.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/s1K-7B-cogRL-v0.1-4-GGUF/resolve/main/s1K-7B-cogRL-v0.1-4.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/s1K-7B-cogRL-v0.1-4-GGUF/resolve/main/s1K-7B-cogRL-v0.1-4.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/s1K-7B-cogRL-v0.1-4-GGUF/resolve/main/s1K-7B-cogRL-v0.1-4.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/s1K-7B-cogRL-v0.1-4-GGUF/resolve/main/s1K-7B-cogRL-v0.1-4.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/s1K-7B-cogRL-v0.1-4-GGUF/resolve/main/s1K-7B-cogRL-v0.1-4.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/s1K-7B-cogRL-v0.1-4-GGUF/resolve/main/s1K-7B-cogRL-v0.1-4.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/s1K-7B-cogRL-v0.1-4-GGUF/resolve/main/s1K-7B-cogRL-v0.1-4.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/s1K-7B-cogRL-v0.1-4-GGUF/resolve/main/s1K-7B-cogRL-v0.1-4.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/s1K-7B-cogRL-v0.1-4-GGUF/resolve/main/s1K-7B-cogRL-v0.1-4.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
John6666/luminarqmix-v7-noobaixl-illustriousxl-anime-style-merge-model-v70-epred-mature-sdxl | John6666 | 2025-05-28T03:59:45Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"girls",
"cute",
"hands",
"human body",
"flatter shading",
"mature",
"merge",
"Illustrious XL v2.0",
"illustrious",
"en",
"base_model:OnomaAIResearch/Illustrious-XL-v2.0",
"base_model:merge:OnomaAIResearch/Illustrious-XL-v2.0",
"base_model:cyberdelia/CyberIllustrious",
"base_model:merge:cyberdelia/CyberIllustrious",
"base_model:hybskgks28275/LuminarQMix",
"base_model:merge:hybskgks28275/LuminarQMix",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2025-05-28T03:54:34Z | ---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- girls
- cute
- hands
- human body
- flatter shading
- mature
- merge
- Illustrious XL v2.0
- illustrious
base_model:
- hybskgks28275/LuminarQMix
- cyberdelia/CyberIllustrious
- OnomaAIResearch/Illustrious-XL-v2.0
---
Original model is [here](https://huggingface.co/hybskgks28275/LuminarQMix) and on [Civitai](https://civitai.com/models/1616309?modelVersionId=1839482).
The author's page is [here](https://huggingface.co/hybskgks28275).
This model was created by [hybskgks28275](https://civitai.com/user/hybskgks28275).
|
gaianet/SmolVLM2-2.2B-Instruct-GGUF | gaianet | 2025-05-28T03:54:53Z | 2 | 0 | transformers | [
"transformers",
"gguf",
"smolvlm",
"image-text-to-text",
"video-text-to-text",
"en",
"base_model:HuggingFaceTB/SmolVLM2-2.2B-Instruct",
"base_model:quantized:HuggingFaceTB/SmolVLM2-2.2B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | image-text-to-text | 2025-05-27T06:20:40Z | ---
base_model: HuggingFaceTB/SmolVLM2-2.2B-Instruct
library_name: transformers
license: apache-2.0
model_creator: HuggingFaceTB
model_name: SmolVLM2-2.2B-Instruct
quantized_by: Second State Inc.
pipeline_tag: image-text-to-text
tags:
- video-text-to-text
language:
- en
---
# SmolVLM2-2.2B-Instruct-GGUF
## Original Model
[HuggingFaceTB/SmolVLM2-2.2B-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM2-2.2B-Instruct)
## Run with Gaianet
**Prompt template:**
prompt template: `smol-vision`
**Context size:**
chat_ctx_size: `2048`
**Run with GaiaNet:**
- Quick start: https://docs.gaianet.ai/node-guide/quick-start
- Customize your node: https://docs.gaianet.ai/node-guide/customize
*Quantized with llama.cpp b5501*
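If you only need the file itself, here is a minimal sketch with `huggingface_hub`; the exact GGUF file name is an assumption, so check the Files tab for the real quant names:
```python
from huggingface_hub import hf_hub_download

# Sketch: the filename below is an assumption; verify it in the repo's Files tab.
path = hf_hub_download(
    repo_id="gaianet/SmolVLM2-2.2B-Instruct-GGUF",
    filename="SmolVLM2-2.2B-Instruct-Q4_K_M.gguf",
)
print(path)
```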
|
One-Girl-One-Wolf-Viral-Videos/One.Girl.One.Wolf.Viral.Video.Leaked.Video.Viral.On.Social.Media.Original.Video.2025 | One-Girl-One-Wolf-Viral-Videos | 2025-05-28T03:54:45Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-28T03:54:31Z | |
MinaMila/llama_instbase_3b_LoRa_Adult_ep6_22 | MinaMila | 2025-05-28T03:51:01Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-28T03:50:58Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Linggowiktiks/Liyama-3B | Linggowiktiks | 2025-05-28T03:49:06Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"tl",
"dataset:Linggowiktiks/AnoNa",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-28T03:38:57Z | ---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
- tl
datasets:
- Linggowiktiks/AnoNa
---
# 🦙 Liyama-3B
**Liyama-3B** is a fine-tuned version of Meta's LLaMA 3.2 3B model, built to understand and respond fluently in **Tagalog**. It was trained on the **AnoNa** dataset over **3 epochs**, aiming for natural, context-aware instruction-following in Filipino.
---
## 🔤 Origin of the Name
The name **Liyama** is a Tagalified version of *llama*, reflecting both its LLaMA base and its Tagalog-focused language capabilities. It mirrors how Filipino often adapts foreign terms into familiar, phonetic forms—like *camera → kamera*, *lion → leon*, and now, *llama → liyama*.
---
## 🧠 Training Data: The AnoNa Dataset
Liyama-3B was trained solely on **response completions** from the **AnoNa** dataset — a self-instruct corpus generated using **Gemini 1.5** and **2.0**.
Inspired by **SimpleQnA**, the dataset contains short, helpful instruction-response pairs. But **AnoNa** introduces several improvements:
- ✅ **Less English, More Tagalog** prompts
- ✅ **Less IFEVAL-style formatting**
- ✅ **No overuse of modifiers** in instructions
- ✅ **Balanced task types** to avoid dominant categories
- ✅ **Complex tasks favored** (65% complex / 35% simple)
- ✅ **Reduced sycophancy** and generic praise
- ✅ **Improved follow-up handling**
- ✅ **AI self-intro appears only when relevant**
- ✅ **Implicit chain-of-thought reasoning**, not labeled
- ✅ **Extra task types** added to increase variety
This focus creates a model that's practical, straightforward, and tuned for **realistic conversational use in Filipino**, without excessive formatting or irrelevant disclaimers.
---
## 🗣️ Use Case
Liyama-3B is ideal for:
- Answering questions in Tagalog
- Writing essays, reflections, and letters in Filipino
- Following natural instructions, even when mixed with English
- Chat-based tasks where fluency and tone matter
- Educational or community apps centered around local language use
---
## 📦 Model Details
| Feature | Value |
|--------------------|----------------------------|
| Base Model | LLaMA-3B v3.2 |
| Fine-tuned Dataset | AnoNa |
| Epochs | 3 |
| Language Focus | Tagalog (with some English)|
| Prompt Format | Responses only |
---
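A minimal inference sketch with 🤗 Transformers; it assumes the tokenizer carries the chat template inherited from the Llama 3.2 base:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Linggowiktiks/Liyama-3B"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Tagalog prompt; the chat template is assumed to come from the Llama 3.2 base.
messages = [{"role": "user", "content": "Ano ang kapital ng Pilipinas?"}]
inputs = tok.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

out = model.generate(inputs, max_new_tokens=128)
print(tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```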
Liyama-3B is part of a broader effort to create open, practical Filipino-language models for real use—not just benchmarks. Expect follow-ups tuned for multi-turn chat, reasoning, and creative tasks. |
John6666/eri-stellartint-v10-illustrious-sdxl | John6666 | 2025-05-28T03:49:00Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"sci-fi",
"fantasy",
"expressive portraits with a luminous, elegant, and subtly surreal finish",
"softness with vivid detail",
"illustrious",
"en",
"base_model:OnomaAIResearch/Illustrious-xl-early-release-v0",
"base_model:finetune:OnomaAIResearch/Illustrious-xl-early-release-v0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2025-05-28T03:43:45Z | ---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- sci-fi
- fantasy
- expressive portraits with a luminous, elegant, and subtly surreal finish
- softness with vivid detail
- illustrious
base_model: OnomaAIResearch/Illustrious-xl-early-release-v0
---
Original model is [here](https://civitai.com/models/1626013/eri-stellartint?modelVersionId=1840343).
This model was created by [Makoto_Eri](https://civitai.com/user/Makoto_Eri).
|
Aluba/zombie2505_39 | Aluba | 2025-05-28T03:47:55Z | 0 | 0 | null | [
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | 2025-05-28T03:24:23Z | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
Aluba/zombie2505_36 | Aluba | 2025-05-28T03:47:13Z | 0 | 0 | null | [
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | 2025-05-28T03:24:10Z | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
DngBack/SunDomain_v2_250528 | DngBack | 2025-05-28T03:46:42Z | 0 | 0 | null | [
"safetensors",
"qwen3",
"license:apache-2.0",
"region:us"
] | null | 2025-05-28T03:30:29Z | ---
license: apache-2.0
---
|
Video-De-do-gotinha-no-iml-portal-zacarias/Full.video.do.gotinha.no.iml.portal.zacarias.foto.do.gotinha.no.iml.morto | Video-De-do-gotinha-no-iml-portal-zacarias | 2025-05-28T03:45:40Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-28T03:45:14Z | |
noza-kit/DPO-llmbook-based-synthetic-data-adapter | noza-kit | 2025-05-28T03:44:50Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"dpo",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2025-02-20T15:22:53Z | ---
library_name: transformers
tags:
- trl
- dpo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
TharindaNirmal/nilkshax | TharindaNirmal | 2025-05-28T03:44:10Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"fluxgym",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-28T03:44:03Z | ---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- fluxgym
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: nilkshaX
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# nilkshaX
A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym)
<Gallery />
## Trigger words
You should use `nilkshaX` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc.
Weights for this model are available in Safetensors format.
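A minimal diffusers sketch for using it outside those UIs; the LoRA weight file name is an assumption, so check the Files tab:
```python
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
# weight_name is an assumption; use the actual .safetensors file in this repo.
pipeline.load_lora_weights("TharindaNirmal/nilkshaX", weight_name="nilkshaX.safetensors")
image = pipeline("nilkshaX").images[0]
```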
|
andresparodi/andrestestmodel | andresparodi | 2025-05-28T03:43:05Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-28T03:12:50Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: andres
---
# Andrestestmodel
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `andres` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
    "prompt": "andres",
    "lora_weights": "https://huggingface.co/andresparodi/andrestestmodel/resolve/main/lora.safetensors"
}
output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)
for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('andresparodi/andrestestmodel', weight_name='lora.safetensors')
image = pipeline('andres').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/andresparodi/andrestestmodel/discussions) to add images that show off what you’ve made with this LoRA.
|
inhacoms/gmr_qlora_v1.1 | inhacoms | 2025-05-28T03:42:46Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-28T03:37:05Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Keltezaa/Sabrina | Keltezaa | 2025-05-28T03:40:46Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:cc-by-nc-nd-4.0",
"region:us"
] | text-to-image | 2025-05-28T03:40:38Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: images/Sabrina.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: Sabrina
license: cc-by-nc-nd-4.0
---
# Sabrina
<Gallery />
## Model description
Sabrina
## Trigger words
You should use `Sabrina` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Keltezaa/Sabrina/tree/main) them in the Files & versions tab.
|
zfdev/squad_v2-16bit-Qwen2.5-1.5B-Instruct | zfdev | 2025-05-28T03:40:04Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"feature-extraction",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2025-05-28T03:37:26Z | ---
base_model: unsloth/qwen2.5-1.5b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** zfdev
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-1.5b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
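A minimal loading sketch with Unsloth; `max_seq_length` is an illustrative choice:
```python
from unsloth import FastLanguageModel

# Sketch: max_seq_length is illustrative; the repo stores 16-bit weights.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="zfdev/squad_v2-16bit-Qwen2.5-1.5B-Instruct",
    max_seq_length=2048,
    load_in_4bit=False,
)
FastLanguageModel.for_inference(model)  # enable fast inference mode
```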
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Keltezaa/beth | Keltezaa | 2025-05-28T03:39:18Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:cc-by-nc-nd-4.0",
"region:us"
] | text-to-image | 2025-05-28T03:39:12Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: images/beth.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: beth
license: cc-by-nc-nd-4.0
---
# beth
<Gallery />
## Model description
beth
## Trigger words
You should use `beth` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Keltezaa/beth/tree/main) them in the Files & versions tab.
|
Keltezaa/Alice | Keltezaa | 2025-05-28T03:36:46Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:cc-by-nc-nd-4.0",
"region:us"
] | text-to-image | 2025-05-28T03:36:39Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: images/Alice_Flux_e000010_01_2025051415.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: Alice
license: cc-by-nc-nd-4.0
---
# Alice
<Gallery />
## Model description
Alice
## Trigger words
You should use `Alice` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Keltezaa/Alice/tree/main) them in the Files & versions tab.
|
mradermacher/Anole-7b-GGUF | mradermacher | 2025-05-28T03:33:48Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:GAIR/Anole-7b",
"base_model:quantized:GAIR/Anole-7b",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2025-05-27T17:12:55Z | ---
base_model: GAIR/Anole-7b
language:
- en
library_name: transformers
license: other
license_link: https://ai.meta.com/resources/models-and-libraries/chameleon-license/
license_name: chameleon-license
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/GAIR/Anole-7b
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Anole-7b-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Anole-7b-GGUF/resolve/main/Anole-7b.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Anole-7b-GGUF/resolve/main/Anole-7b.Q3_K_S.gguf) | Q3_K_S | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/Anole-7b-GGUF/resolve/main/Anole-7b.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Anole-7b-GGUF/resolve/main/Anole-7b.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Anole-7b-GGUF/resolve/main/Anole-7b.IQ4_XS.gguf) | IQ4_XS | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Anole-7b-GGUF/resolve/main/Anole-7b.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Anole-7b-GGUF/resolve/main/Anole-7b.Q4_K_M.gguf) | Q4_K_M | 4.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Anole-7b-GGUF/resolve/main/Anole-7b.Q5_K_S.gguf) | Q5_K_S | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/Anole-7b-GGUF/resolve/main/Anole-7b.Q5_K_M.gguf) | Q5_K_M | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Anole-7b-GGUF/resolve/main/Anole-7b.Q6_K.gguf) | Q6_K | 5.9 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Anole-7b-GGUF/resolve/main/Anole-7b.Q8_0.gguf) | Q8_0 | 7.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Anole-7b-GGUF/resolve/main/Anole-7b.f16.gguf) | f16 | 14.1 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Anole-7b-i1-GGUF | mradermacher | 2025-05-28T03:33:48Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:GAIR/Anole-7b",
"base_model:quantized:GAIR/Anole-7b",
"license:other",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2025-05-27T20:49:51Z | ---
base_model: GAIR/Anole-7b
language:
- en
library_name: transformers
license: other
license_link: https://ai.meta.com/resources/models-and-libraries/chameleon-license/
license_name: chameleon-license
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/GAIR/Anole-7b
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Anole-7b-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Anole-7b-i1-GGUF/resolve/main/Anole-7b.i1-IQ1_S.gguf) | i1-IQ1_S | 1.8 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Anole-7b-i1-GGUF/resolve/main/Anole-7b.i1-IQ1_M.gguf) | i1-IQ1_M | 1.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Anole-7b-i1-GGUF/resolve/main/Anole-7b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/Anole-7b-i1-GGUF/resolve/main/Anole-7b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Anole-7b-i1-GGUF/resolve/main/Anole-7b.i1-IQ2_S.gguf) | i1-IQ2_S | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/Anole-7b-i1-GGUF/resolve/main/Anole-7b.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.6 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Anole-7b-i1-GGUF/resolve/main/Anole-7b.i1-IQ2_M.gguf) | i1-IQ2_M | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/Anole-7b-i1-GGUF/resolve/main/Anole-7b.i1-Q2_K.gguf) | i1-Q2_K | 2.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Anole-7b-i1-GGUF/resolve/main/Anole-7b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Anole-7b-i1-GGUF/resolve/main/Anole-7b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Anole-7b-i1-GGUF/resolve/main/Anole-7b.i1-IQ3_S.gguf) | i1-IQ3_S | 3.2 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Anole-7b-i1-GGUF/resolve/main/Anole-7b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.2 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Anole-7b-i1-GGUF/resolve/main/Anole-7b.i1-IQ3_M.gguf) | i1-IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Anole-7b-i1-GGUF/resolve/main/Anole-7b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Anole-7b-i1-GGUF/resolve/main/Anole-7b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Anole-7b-i1-GGUF/resolve/main/Anole-7b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Anole-7b-i1-GGUF/resolve/main/Anole-7b.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.1 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Anole-7b-i1-GGUF/resolve/main/Anole-7b.i1-Q4_0.gguf) | i1-Q4_0 | 4.1 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Anole-7b-i1-GGUF/resolve/main/Anole-7b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Anole-7b-i1-GGUF/resolve/main/Anole-7b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Anole-7b-i1-GGUF/resolve/main/Anole-7b.i1-Q4_1.gguf) | i1-Q4_1 | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Anole-7b-i1-GGUF/resolve/main/Anole-7b.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/Anole-7b-i1-GGUF/resolve/main/Anole-7b.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Anole-7b-i1-GGUF/resolve/main/Anole-7b.i1-Q6_K.gguf) | i1-Q6_K | 5.9 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Runyi-Hu/MaskMark | Runyi-Hu | 2025-05-28T03:32:02Z | 0 | 1 | null | [
"arxiv:2504.12739",
"license:apache-2.0",
"region:us"
] | null | 2025-04-29T08:08:56Z | ---
license: apache-2.0
---
Model weights for [Mask Image Watermarking](http://arxiv.org/abs/2504.12739).
Code: https://github.com/hurunyi/MaskMark. |
thevan2404/coteT5-phase2-denoise | thevan2404 | 2025-05-28T03:27:27Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:BienKieu/codeT5-phase1-version2",
"base_model:finetune:BienKieu/codeT5-phase1-version2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-05-27T17:09:47Z | ---
library_name: transformers
license: apache-2.0
base_model: BienKieu/codeT5-phase1-version2
tags:
- generated_from_trainer
model-index:
- name: coteT5-phase2-denoise
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# coteT5-phase2-denoise
This model is a fine-tuned version of [BienKieu/codeT5-phase1-version2](https://huggingface.co/BienKieu/codeT5-phase1-version2) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 14
- eval_batch_size: 4
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
mradermacher/Table-R1-Zero-7B-GGUF | mradermacher | 2025-05-28T03:26:44Z | 0 | 1 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Raywithyou/Table-R1-Zero-7B",
"base_model:quantized:Raywithyou/Table-R1-Zero-7B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-28T02:38:54Z | ---
base_model: Raywithyou/Table-R1-Zero-7B
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Raywithyou/Table-R1-Zero-7B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Table-R1-Zero-7B-GGUF/resolve/main/Table-R1-Zero-7B.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Table-R1-Zero-7B-GGUF/resolve/main/Table-R1-Zero-7B.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Table-R1-Zero-7B-GGUF/resolve/main/Table-R1-Zero-7B.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Table-R1-Zero-7B-GGUF/resolve/main/Table-R1-Zero-7B.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Table-R1-Zero-7B-GGUF/resolve/main/Table-R1-Zero-7B.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Table-R1-Zero-7B-GGUF/resolve/main/Table-R1-Zero-7B.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Table-R1-Zero-7B-GGUF/resolve/main/Table-R1-Zero-7B.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Table-R1-Zero-7B-GGUF/resolve/main/Table-R1-Zero-7B.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Table-R1-Zero-7B-GGUF/resolve/main/Table-R1-Zero-7B.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Table-R1-Zero-7B-GGUF/resolve/main/Table-R1-Zero-7B.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Table-R1-Zero-7B-GGUF/resolve/main/Table-R1-Zero-7B.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Table-R1-Zero-7B-GGUF/resolve/main/Table-R1-Zero-7B.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Jamvess/week_4_model_14b | Jamvess | 2025-05-28T03:23:47Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"base_model:unsloth/Qwen3-14B-unsloth-bnb-4bit",
"base_model:finetune:unsloth/Qwen3-14B-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-28T03:23:29Z | ---
base_model: unsloth/Qwen3-14B-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Jamvess
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-14B-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
fangxq/XYZ-embedding | fangxq | 2025-05-28T03:23:41Z | 0 | 0 | null | [
"pytorch",
"bert",
"mteb",
"zh",
"model-index",
"region:us"
] | null | 2025-05-27T07:51:19Z | ---
model-index:
- name: XYZ-embedding
results:
- dataset:
config: default
name: MTEB CmedqaRetrieval
revision: None
split: dev
type: C-MTEB/CmedqaRetrieval
metrics:
- type: map_at_1
value: 27.796
- type: map_at_10
value: 41.498000000000005
- type: map_at_100
value: 43.332
- type: map_at_1000
value: 43.429
- type: map_at_3
value: 37.172
- type: map_at_5
value: 39.617000000000004
- type: mrr_at_1
value: 42.111
- type: mrr_at_10
value: 50.726000000000006
- type: mrr_at_100
value: 51.632
- type: mrr_at_1000
value: 51.67
- type: mrr_at_3
value: 48.429
- type: mrr_at_5
value: 49.662
- type: ndcg_at_1
value: 42.111
- type: ndcg_at_10
value: 48.294
- type: ndcg_at_100
value: 55.135999999999996
- type: ndcg_at_1000
value: 56.818000000000005
- type: ndcg_at_3
value: 43.185
- type: ndcg_at_5
value: 45.266
- type: precision_at_1
value: 42.111
- type: precision_at_10
value: 10.635
- type: precision_at_100
value: 1.619
- type: precision_at_1000
value: 0.183
- type: precision_at_3
value: 24.539
- type: precision_at_5
value: 17.644000000000002
- type: recall_at_1
value: 27.796
- type: recall_at_10
value: 59.034
- type: recall_at_100
value: 86.991
- type: recall_at_1000
value: 98.304
- type: recall_at_3
value: 43.356
- type: recall_at_5
value: 49.998
- type: main_score
value: 48.294
task:
type: Retrieval
- dataset:
config: default
name: MTEB CovidRetrieval
revision: None
split: dev
type: C-MTEB/CovidRetrieval
metrics:
- type: map_at_1
value: 80.479
- type: map_at_10
value: 87.984
- type: map_at_100
value: 88.036
- type: map_at_1000
value: 88.03699999999999
- type: map_at_3
value: 87.083
- type: map_at_5
value: 87.694
- type: mrr_at_1
value: 80.927
- type: mrr_at_10
value: 88.046
- type: mrr_at_100
value: 88.099
- type: mrr_at_1000
value: 88.1
- type: mrr_at_3
value: 87.215
- type: mrr_at_5
value: 87.768
- type: ndcg_at_1
value: 80.927
- type: ndcg_at_10
value: 90.756
- type: ndcg_at_100
value: 90.96
- type: ndcg_at_1000
value: 90.975
- type: ndcg_at_3
value: 89.032
- type: ndcg_at_5
value: 90.106
- type: precision_at_1
value: 80.927
- type: precision_at_10
value: 10.011000000000001
- type: precision_at_100
value: 1.009
- type: precision_at_1000
value: 0.101
- type: precision_at_3
value: 31.752999999999997
- type: precision_at_5
value: 19.6
- type: recall_at_1
value: 80.479
- type: recall_at_10
value: 99.05199999999999
- type: recall_at_100
value: 99.895
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 94.494
- type: recall_at_5
value: 97.102
- type: main_score
value: 90.756
task:
type: Retrieval
- dataset:
config: default
name: MTEB DuRetrieval
revision: None
split: dev
type: C-MTEB/DuRetrieval
metrics:
- type: map_at_1
value: 27.853
- type: map_at_10
value: 85.13199999999999
- type: map_at_100
value: 87.688
- type: map_at_1000
value: 87.712
- type: map_at_3
value: 59.705
- type: map_at_5
value: 75.139
- type: mrr_at_1
value: 93.65
- type: mrr_at_10
value: 95.682
- type: mrr_at_100
value: 95.722
- type: mrr_at_1000
value: 95.724
- type: mrr_at_3
value: 95.467
- type: mrr_at_5
value: 95.612
- type: ndcg_at_1
value: 93.65
- type: ndcg_at_10
value: 91.155
- type: ndcg_at_100
value: 93.183
- type: ndcg_at_1000
value: 93.38499999999999
- type: ndcg_at_3
value: 90.648
- type: ndcg_at_5
value: 89.47699999999999
- type: precision_at_1
value: 93.65
- type: precision_at_10
value: 43.11
- type: precision_at_100
value: 4.854
- type: precision_at_1000
value: 0.49100000000000005
- type: precision_at_3
value: 81.11699999999999
- type: precision_at_5
value: 68.28999999999999
- type: recall_at_1
value: 27.853
- type: recall_at_10
value: 91.678
- type: recall_at_100
value: 98.553
- type: recall_at_1000
value: 99.553
- type: recall_at_3
value: 61.381
- type: recall_at_5
value: 78.605
- type: main_score
value: 91.155
task:
type: Retrieval
- dataset:
config: default
name: MTEB EcomRetrieval
revision: None
split: dev
type: C-MTEB/EcomRetrieval
metrics:
- type: map_at_1
value: 54.50000000000001
- type: map_at_10
value: 65.167
- type: map_at_100
value: 65.664
- type: map_at_1000
value: 65.67399999999999
- type: map_at_3
value: 62.633
- type: map_at_5
value: 64.208
- type: mrr_at_1
value: 54.50000000000001
- type: mrr_at_10
value: 65.167
- type: mrr_at_100
value: 65.664
- type: mrr_at_1000
value: 65.67399999999999
- type: mrr_at_3
value: 62.633
- type: mrr_at_5
value: 64.208
- type: ndcg_at_1
value: 54.50000000000001
- type: ndcg_at_10
value: 70.294
- type: ndcg_at_100
value: 72.564
- type: ndcg_at_1000
value: 72.841
- type: ndcg_at_3
value: 65.128
- type: ndcg_at_5
value: 67.96799999999999
- type: precision_at_1
value: 54.50000000000001
- type: precision_at_10
value: 8.64
- type: precision_at_100
value: 0.967
- type: precision_at_1000
value: 0.099
- type: precision_at_3
value: 24.099999999999998
- type: precision_at_5
value: 15.840000000000002
- type: recall_at_1
value: 54.50000000000001
- type: recall_at_10
value: 86.4
- type: recall_at_100
value: 96.7
- type: recall_at_1000
value: 98.9
- type: recall_at_3
value: 72.3
- type: recall_at_5
value: 79.2
- type: main_score
value: 70.294
task:
type: Retrieval
- dataset:
config: default
name: MTEB MMarcoRetrieval
revision: None
split: dev
type: C-MTEB/MMarcoRetrieval
metrics:
- type: map_at_1
value: 69.401
- type: map_at_10
value: 78.8
- type: map_at_100
value: 79.077
- type: map_at_1000
value: 79.081
- type: map_at_3
value: 76.97
- type: map_at_5
value: 78.185
- type: mrr_at_1
value: 71.719
- type: mrr_at_10
value: 79.327
- type: mrr_at_100
value: 79.56400000000001
- type: mrr_at_1000
value: 79.56800000000001
- type: mrr_at_3
value: 77.736
- type: mrr_at_5
value: 78.782
- type: ndcg_at_1
value: 71.719
- type: ndcg_at_10
value: 82.505
- type: ndcg_at_100
value: 83.673
- type: ndcg_at_1000
value: 83.786
- type: ndcg_at_3
value: 79.07600000000001
- type: ndcg_at_5
value: 81.122
- type: precision_at_1
value: 71.719
- type: precision_at_10
value: 9.924
- type: precision_at_100
value: 1.049
- type: precision_at_1000
value: 0.106
- type: precision_at_3
value: 29.742
- type: precision_at_5
value: 18.937
- type: recall_at_1
value: 69.401
- type: recall_at_10
value: 93.349
- type: recall_at_100
value: 98.492
- type: recall_at_1000
value: 99.384
- type: recall_at_3
value: 84.385
- type: recall_at_5
value: 89.237
- type: main_score
value: 82.505
task:
type: Retrieval
- dataset:
config: default
name: MTEB MedicalRetrieval
revision: None
split: dev
type: C-MTEB/MedicalRetrieval
metrics:
- type: map_at_1
value: 57.8
- type: map_at_10
value: 64.696
- type: map_at_100
value: 65.294
- type: map_at_1000
value: 65.328
- type: map_at_3
value: 62.949999999999996
- type: map_at_5
value: 64.095
- type: mrr_at_1
value: 58.099999999999994
- type: mrr_at_10
value: 64.85
- type: mrr_at_100
value: 65.448
- type: mrr_at_1000
value: 65.482
- type: mrr_at_3
value: 63.1
- type: mrr_at_5
value: 64.23
- type: ndcg_at_1
value: 57.8
- type: ndcg_at_10
value: 68.041
- type: ndcg_at_100
value: 71.074
- type: ndcg_at_1000
value: 71.919
- type: ndcg_at_3
value: 64.584
- type: ndcg_at_5
value: 66.625
- type: precision_at_1
value: 57.8
- type: precision_at_10
value: 7.85
- type: precision_at_100
value: 0.9289999999999999
- type: precision_at_1000
value: 0.099
- type: precision_at_3
value: 23.1
- type: precision_at_5
value: 14.84
- type: recall_at_1
value: 57.8
- type: recall_at_10
value: 78.5
- type: recall_at_100
value: 92.9
- type: recall_at_1000
value: 99.4
- type: recall_at_3
value: 69.3
- type: recall_at_5
value: 74.2
- type: main_score
value: 68.041
task:
type: Retrieval
- dataset:
config: default
name: MTEB T2Retrieval
revision: None
split: dev
type: C-MTEB/T2Retrieval
metrics:
- type: map_at_1
value: 28.041
- type: map_at_10
value: 78.509
- type: map_at_100
value: 82.083
- type: map_at_1000
value: 82.143
- type: map_at_3
value: 55.345
- type: map_at_5
value: 67.899
- type: mrr_at_1
value: 90.86
- type: mrr_at_10
value: 93.31
- type: mrr_at_100
value: 93.388
- type: mrr_at_1000
value: 93.391
- type: mrr_at_3
value: 92.92200000000001
- type: mrr_at_5
value: 93.167
- type: ndcg_at_1
value: 90.86
- type: ndcg_at_10
value: 85.875
- type: ndcg_at_100
value: 89.269
- type: ndcg_at_1000
value: 89.827
- type: ndcg_at_3
value: 87.254
- type: ndcg_at_5
value: 85.855
- type: precision_at_1
value: 90.86
- type: precision_at_10
value: 42.488
- type: precision_at_100
value: 5.029
- type: precision_at_1000
value: 0.516
- type: precision_at_3
value: 76.172
- type: precision_at_5
value: 63.759
- type: recall_at_1
value: 28.041
- type: recall_at_10
value: 84.829
- type: recall_at_100
value: 95.89999999999999
- type: recall_at_1000
value: 98.665
- type: recall_at_3
value: 57.009
- type: recall_at_5
value: 71.188
- type: main_score
value: 85.875
task:
type: Retrieval
- dataset:
config: default
name: MTEB VideoRetrieval
revision: None
split: dev
type: C-MTEB/VideoRetrieval
metrics:
- type: map_at_1
value: 67.30000000000001
- type: map_at_10
value: 76.819
- type: map_at_100
value: 77.141
- type: map_at_1000
value: 77.142
- type: map_at_3
value: 75.233
- type: map_at_5
value: 76.163
- type: mrr_at_1
value: 67.30000000000001
- type: mrr_at_10
value: 76.819
- type: mrr_at_100
value: 77.141
- type: mrr_at_1000
value: 77.142
- type: mrr_at_3
value: 75.233
- type: mrr_at_5
value: 76.163
- type: ndcg_at_1
value: 67.30000000000001
- type: ndcg_at_10
value: 80.93599999999999
- type: ndcg_at_100
value: 82.311
- type: ndcg_at_1000
value: 82.349
- type: ndcg_at_3
value: 77.724
- type: ndcg_at_5
value: 79.406
- type: precision_at_1
value: 67.30000000000001
- type: precision_at_10
value: 9.36
- type: precision_at_100
value: 0.996
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 28.299999999999997
- type: precision_at_5
value: 17.8
- type: recall_at_1
value: 67.30000000000001
- type: recall_at_10
value: 93.60000000000001
- type: recall_at_100
value: 99.6
- type: recall_at_1000
value: 99.9
- type: recall_at_3
value: 84.89999999999999
- type: recall_at_5
value: 89.0
- type: main_score
value: 80.93599999999999
task:
type: Retrieval
tags:
- mteb
language:
- zh
---
<h2 align="left">XYZ-embedding</h2>
## Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("fangxq/XYZ-embedding")
# Run inference
sentences = [
'The weather is lovely today.',
"It's so sunny outside!",
'He drove to the stadium.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1792]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
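# --- Additional sketch (not from the original card): ranking candidate passages
# for a query, mirroring the C-MTEB retrieval tasks this model is evaluated on.
# The Chinese query/passage strings below are illustrative assumptions.
query_embedding = model.encode(["如何缓解感冒症状?"])
passage_embeddings = model.encode([
    "多喝水、充分休息有助于缓解感冒症状。",
    "今日股票市场大幅上涨。",
])
scores = model.similarity(query_embedding, passage_embeddings)
print(scores)  # the relevant passage should receive the higher score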
``` |
wagens/Llama-3.1-8B-Instruct-Q4_K_M-GGUF | wagens | 2025-05-28T03:23:38Z | 0 | 0 | null | [
"gguf",
"facebook",
"meta",
"pytorch",
"llama",
"llama-3",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"de",
"fr",
"it",
"pt",
"hi",
"es",
"th",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:quantized:meta-llama/Llama-3.1-8B-Instruct",
"license:llama3.1",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-05-28T03:23:13Z | ---
language:
- en
- de
- fr
- it
- pt
- hi
- es
- th
license: llama3.1
base_model: meta-llama/Llama-3.1-8B-Instruct
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
- llama-cpp
- gguf-my-repo
extra_gated_prompt: "### LLAMA 3.1 COMMUNITY LICENSE AGREEMENT\nLlama 3.1 Version\
\ Release Date: July 23, 2024\n\"Agreement\" means the terms and conditions for\
\ use, reproduction, distribution and modification of the Llama Materials set forth\
\ herein.\n\"Documentation\" means the specifications, manuals and documentation\
\ accompanying Llama 3.1 distributed by Meta at https://llama.meta.com/doc/overview.\n\
\"Licensee\" or \"you\" means you, or your employer or any other person or entity\
\ (if you are entering into this Agreement on such person or entity’s behalf), of\
\ the age required under applicable laws, rules or regulations to provide legal\
\ consent and that has legal authority to bind your employer or such other person\
\ or entity if you are entering in this Agreement on their behalf.\n\"Llama 3.1\"\
\ means the foundational large language models and software and algorithms, including\
\ machine-learning model code, trained model weights, inference-enabling code, training-enabling\
\ code, fine-tuning enabling code and other elements of the foregoing distributed\
\ by Meta at https://llama.meta.com/llama-downloads.\n\"Llama Materials\" means,\
\ collectively, Meta’s proprietary Llama 3.1 and Documentation (and any portion\
\ thereof) made available under this Agreement.\n\"Meta\" or \"we\" means Meta Platforms\
\ Ireland Limited (if you are located in or, if you are an entity, your principal\
\ place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you\
\ are located outside of the EEA or Switzerland).\n \n1. License Rights and Redistribution.\n\
a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable\
\ and royalty-free limited license under Meta’s intellectual property or other rights\
\ owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy,\
\ create derivative works of, and make modifications to the Llama Materials.\nb.\
\ Redistribution and Use.\ni. If you distribute or make available the Llama Materials\
\ (or any derivative works thereof), or a product or service (including another\
\ AI model) that contains any of them, you shall (A) provide a copy of this Agreement\
\ with any such Llama Materials; and (B) prominently display “Built with Llama”\
\ on a related website, user interface, blogpost, about page, or product documentation.\
\ If you use the Llama Materials or any outputs or results of the Llama Materials\
\ to create, train, fine tune, or otherwise improve an AI model, which is distributed\
\ or made available, you shall also include “Llama” at the beginning of any such\
\ AI model name.\nii. If you receive Llama Materials, or any derivative works thereof,\
\ from a Licensee as part of an integrated end user product, then Section 2 of\
\ this Agreement will not apply to you.\niii. You must retain in all copies of the\
\ Llama Materials that you distribute the following attribution notice within a\
\ “Notice” text file distributed as a part of such copies: “Llama 3.1 is licensed\
\ under the Llama 3.1 Community License, Copyright © Meta Platforms, Inc. All Rights\
\ Reserved.”\niv. Your use of the Llama Materials must comply with applicable laws\
\ and regulations (including trade compliance laws and regulations) and adhere to\
\ the Acceptable Use Policy for the Llama Materials (available at https://llama.meta.com/llama3_1/use-policy),\
\ which is hereby incorporated by reference into this Agreement.\n2. Additional\
\ Commercial Terms. If, on the Llama 3.1 version release date, the monthly active\
\ users of the products or services made available by or for Licensee, or Licensee’s\
\ affiliates, is greater than 700 million monthly active users in the preceding\
\ calendar month, you must request a license from Meta, which Meta may grant to\
\ you in its sole discretion, and you are not authorized to exercise any of the\
\ rights under this Agreement unless or until Meta otherwise expressly grants you\
\ such rights.\n3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE\
\ LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS”\
\ BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY\
\ KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES\
\ OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE.\
\ YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING\
\ THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA\
\ MATERIALS AND ANY OUTPUT AND RESULTS.\n4. Limitation of Liability. IN NO EVENT\
\ WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN\
\ CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS\
\ AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL,\
\ EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED\
\ OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n5. Intellectual Property.\na. No\
\ trademark licenses are granted under this Agreement, and in connection with the\
\ Llama Materials, neither Meta nor Licensee may use any name or mark owned by or\
\ associated with the other or any of its affiliates, except as required for reasonable\
\ and customary use in describing and redistributing the Llama Materials or as set\
\ forth in this Section 5(a). Meta hereby grants you a license to use “Llama” (the\
\ “Mark”) solely as required to comply with the last sentence of Section 1.b.i.\
\ You will comply with Meta’s brand guidelines (currently accessible at https://about.meta.com/brand/resources/meta/company-brand/\
\ ). All goodwill arising out of your use of the Mark will inure to the benefit\
\ of Meta.\nb. Subject to Meta’s ownership of Llama Materials and derivatives made\
\ by or for Meta, with respect to any derivative works and modifications of the\
\ Llama Materials that are made by you, as between you and Meta, you are and will\
\ be the owner of such derivative works and modifications.\nc. If you institute\
\ litigation or other proceedings against Meta or any entity (including a cross-claim\
\ or counterclaim in a lawsuit) alleging that the Llama Materials or Llama 3.1 outputs\
\ or results, or any portion of any of the foregoing, constitutes infringement of\
\ intellectual property or other rights owned or licensable by you, then any licenses\
\ granted to you under this Agreement shall terminate as of the date such litigation\
\ or claim is filed or instituted. You will indemnify and hold harmless Meta from\
\ and against any claim by any third party arising out of or related to your use\
\ or distribution of the Llama Materials.\n6. Term and Termination. The term of\
\ this Agreement will commence upon your acceptance of this Agreement or access\
\ to the Llama Materials and will continue in full force and effect until terminated\
\ in accordance with the terms and conditions herein. Meta may terminate this Agreement\
\ if you are in breach of any term or condition of this Agreement. Upon termination\
\ of this Agreement, you shall delete and cease use of the Llama Materials. Sections\
\ 3, 4 and 7 shall survive the termination of this Agreement.\n7. Governing Law\
\ and Jurisdiction. This Agreement will be governed and construed under the laws\
\ of the State of California without regard to choice of law principles, and the\
\ UN Convention on Contracts for the International Sale of Goods does not apply\
\ to this Agreement. The courts of California shall have exclusive jurisdiction\
\ of any dispute arising out of this Agreement.\n### Llama 3.1 Acceptable Use Policy\n\
Meta is committed to promoting safe and fair use of its tools and features, including\
\ Llama 3.1. If you access or use Llama 3.1, you agree to this Acceptable Use Policy\
\ (“Policy”). The most recent copy of this policy can be found at [https://llama.meta.com/llama3_1/use-policy](https://llama.meta.com/llama3_1/use-policy)\n\
#### Prohibited Uses\nWe want everyone to use Llama 3.1 safely and responsibly.\
\ You agree you will not use, or allow others to use, Llama 3.1 to:\n 1. Violate\
\ the law or others’ rights, including to:\n 1. Engage in, promote, generate,\
\ contribute to, encourage, plan, incite, or further illegal or unlawful activity\
\ or content, such as:\n 1. Violence or terrorism\n 2. Exploitation\
\ or harm to children, including the solicitation, creation, acquisition, or dissemination\
\ of child exploitative content or failure to report Child Sexual Abuse Material\n\
\ 3. Human trafficking, exploitation, and sexual violence\n 4. The\
\ illegal distribution of information or materials to minors, including obscene\
\ materials, or failure to employ legally required age-gating in connection with\
\ such information or materials.\n 5. Sexual solicitation\n 6. Any\
\ other criminal activity\n 3. Engage in, promote, incite, or facilitate the\
\ harassment, abuse, threatening, or bullying of individuals or groups of individuals\n\
\ 4. Engage in, promote, incite, or facilitate discrimination or other unlawful\
\ or harmful conduct in the provision of employment, employment benefits, credit,\
\ housing, other economic benefits, or other essential goods and services\n 5.\
\ Engage in the unauthorized or unlicensed practice of any profession including,\
\ but not limited to, financial, legal, medical/health, or related professional\
\ practices\n 6. Collect, process, disclose, generate, or infer health, demographic,\
\ or other sensitive personal or private information about individuals without rights\
\ and consents required by applicable laws\n 7. Engage in or facilitate any action\
\ or generate any content that infringes, misappropriates, or otherwise violates\
\ any third-party rights, including the outputs or results of any products or services\
\ using the Llama Materials\n 8. Create, generate, or facilitate the creation\
\ of malicious code, malware, computer viruses or do anything else that could disable,\
\ overburden, interfere with or impair the proper working, integrity, operation\
\ or appearance of a website or computer system\n2. Engage in, promote, incite,\
\ facilitate, or assist in the planning or development of activities that present\
\ a risk of death or bodily harm to individuals, including use of Llama 3.1 related\
\ to the following:\n 1. Military, warfare, nuclear industries or applications,\
\ espionage, use for materials or activities that are subject to the International\
\ Traffic Arms Regulations (ITAR) maintained by the United States Department of\
\ State\n 2. Guns and illegal weapons (including weapon development)\n 3.\
\ Illegal drugs and regulated/controlled substances\n 4. Operation of critical\
\ infrastructure, transportation technologies, or heavy machinery\n 5. Self-harm\
\ or harm to others, including suicide, cutting, and eating disorders\n 6. Any\
\ content intended to incite or promote violence, abuse, or any infliction of bodily\
\ harm to an individual\n3. Intentionally deceive or mislead others, including use\
\ of Llama 3.1 related to the following:\n 1. Generating, promoting, or furthering\
\ fraud or the creation or promotion of disinformation\n 2. Generating, promoting,\
\ or furthering defamatory content, including the creation of defamatory statements,\
\ images, or other content\n 3. Generating, promoting, or further distributing\
\ spam\n 4. Impersonating another individual without consent, authorization,\
\ or legal right\n 5. Representing that the use of Llama 3.1 or outputs are human-generated\n\
\ 6. Generating or facilitating false online engagement, including fake reviews\
\ and other means of fake online engagement\n4. Fail to appropriately disclose to\
\ end users any known dangers of your AI system\nPlease report any violation of\
\ this Policy, software “bug,” or other problems that could lead to a violation\
\ of this Policy through one of the following means:\n * Reporting issues with\
\ the model: [https://github.com/meta-llama/llama-models/issues](https://github.com/meta-llama/llama-models/issues)\n\
\ * Reporting risky content generated by the model:\n developers.facebook.com/llama_output_feedback\n\
\ * Reporting bugs and security concerns: facebook.com/whitehat/info\n * Reporting\
\ violations of the Acceptable Use Policy or unlicensed uses of Meta Llama 3: [email protected]"
extra_gated_fields:
First Name: text
Last Name: text
Date of birth: date_picker
Country: country
Affiliation: text
Job title:
type: select
options:
- Student
- Research Graduate
- AI researcher
- AI developer/engineer
- Reporter
- Other
geo: ip_location
? By clicking Submit below I accept the terms of the license and acknowledge that
the information I provide will be collected stored processed and shared in accordance
with the Meta Privacy Policy
: checkbox
extra_gated_description: The information you provide will be collected, stored, processed
and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/).
extra_gated_button_content: Submit
---
# wagens/Llama-3.1-8B-Instruct-Q4_K_M-GGUF
This model was converted to GGUF format from [`meta-llama/Llama-3.1-8B-Instruct`](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo wagens/Llama-3.1-8B-Instruct-Q4_K_M-GGUF --hf-file llama-3.1-8b-instruct-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo wagens/Llama-3.1-8B-Instruct-Q4_K_M-GGUF --hf-file llama-3.1-8b-instruct-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo wagens/Llama-3.1-8B-Instruct-Q4_K_M-GGUF --hf-file llama-3.1-8b-instruct-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo wagens/Llama-3.1-8B-Instruct-Q4_K_M-GGUF --hf-file llama-3.1-8b-instruct-q4_k_m.gguf -c 2048
```
|
picard47at/punctuation_512_0.6B | picard47at | 2025-05-28T03:22:52Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"unsloth",
"trl",
"sft",
"endpoints_compatible",
"region:us"
] | null | 2025-05-28T02:31:14Z | ---
base_model: unsloth/qwen3-1.7b-unsloth-bnb-4bit
library_name: transformers
model_name: punctuation_512
tags:
- generated_from_trainer
- unsloth
- trl
- sft
licence: license
---
# Model Card for punctuation_512
This model is a fine-tuned version of [unsloth/qwen3-1.7b-unsloth-bnb-4bit](https://huggingface.co/unsloth/qwen3-1.7b-unsloth-bnb-4bit).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="picard47at/punctuation_512", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
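The question above is the generic example from the autogenerated TRL card. Judging by the model name, this checkpoint appears to be fine-tuned for punctuation restoration; the sketch below rests on that assumption, and the plain-text prompt format is illustrative rather than documented.

```python
from transformers import pipeline

# Assumption: the model inserts punctuation into unpunctuated text.
restorer = pipeline("text-generation", model="picard47at/punctuation_512", device="cuda")
raw_text = "今天天氣很好我們一起去公園散步吧"  # hypothetical unpunctuated input
output = restorer([{"role": "user", "content": raw_text}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```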
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/picardtseng-pesi/punctuation/runs/gghra5tk)
This model was trained with SFT.
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Bmingg/qwen2.5-0.5B-Instruct-DPO-5000-3epochs | Bmingg | 2025-05-28T03:22:37Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-28T03:22:12Z | ---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
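Since the usage section is not filled in, here is a minimal, hedged sketch for loading this checkpoint with `transformers`; the repo id is taken from this model page and the chat prompt is illustrative.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Bmingg/qwen2.5-0.5B-Instruct-DPO-5000-3epochs"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Hello!"}]  # illustrative prompt
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```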
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Cloudmaster/Llama-3.2-3B-HF-hqq-00 | Cloudmaster | 2025-05-28T03:22:11Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"hqq",
"region:us"
] | text-generation | 2025-05-28T03:20:05Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
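Pending the card being completed, a minimal sketch for loading this HQQ-quantized checkpoint with `transformers` (assumes the quantization config is stored with the checkpoint and that the `hqq` package is installed):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Cloudmaster/Llama-3.2-3B-HF-hqq-00"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")  # picks up the saved HQQ config

prompt = "Hello!"  # illustrative
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```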
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
jjaehyeok2/14_5_23 | jjaehyeok2 | 2025-05-28T03:21:35Z | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"diffusers-training",
"lora",
"base_model:Bingsu/my-korean-stable-diffusion-v1-5",
"base_model:adapter:Bingsu/my-korean-stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2025-05-27T17:24:09Z | ---
base_model: Bingsu/my-korean-stable-diffusion-v1-5
library_name: diffusers
license: creativeml-openrail-m
inference: true
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- diffusers-training
- lora
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# LoRA text2image fine-tuning - jjaehyeok2/14_5_23
These are LoRA adaption weights for Bingsu/my-korean-stable-diffusion-v1-5. The weights were fine-tuned on the jjaehyeok2/14_5_23 dataset. You can find some example images in the following.




## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
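Until the TODO above is filled in, here is a minimal sketch of loading these LoRA weights on top of the base pipeline with `diffusers`; the Korean prompt is an illustrative assumption.

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the base model this LoRA was trained against, then attach the adapter.
pipe = StableDiffusionPipeline.from_pretrained(
    "Bingsu/my-korean-stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("jjaehyeok2/14_5_23")  # LoRA weights from this repo

image = pipe("김치찌개 사진", num_inference_steps=30).images[0]  # hypothetical prompt
image.save("sample.png")
```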
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
IDEA-Emdoor/UniTTS-mixed-v0.1 | IDEA-Emdoor | 2025-05-28T03:20:51Z | 1 | 0 | null | [
"pytorch",
"safetensors",
"qwen2",
"arxiv:2505.17426",
"region:us"
] | null | 2025-04-29T06:02:39Z | ---
{}
---
## UniTTS
### Overview
We introduce UniTTS and [DistilCodec](https://github.com/IDEA-Emdoor-Lab/DistilCodec). DistilCodec is a single-codebook audio codec with 32,768 codes, and its codebook utilization reaches nearly 100%. UniTTS leverages DistilCodec for audio discretization, while its backbone network adopts Qwen2.5-7B to model relationships between audio tokens.
Our main contributions are summarized as follows:
- DistilCodec: We propose a training methodology that enables the distillation of multi-codebook Neural Audio Codecs (NAC) into single-codebook NAC. Through this approach, we have developed DistilCodec - a single-codebook NAC containing 32,768 codes that achieves 100% utilization with balanced code distribution. Notably, DistilCodec employs universal audio data for training rather than being restricted to speech-specific datasets.
- UniTTS: We present UniTTS, a novel TTS system trained on Qwen2.5-7B and DistilCodec. Leveraging DistilCodec's comprehensive audio modeling capabilities, UniTTS achieves end-to-end speech synthesis with full-spectrum audio input/output. The system demonstrates enhanced naturalness in emotional expressiveness compared to conventional TTS systems, particularly in capturing subtle prosodic variations and affective nuances during audio generation.
- Novel Audio Language Model Paradigm: We establish a dual-phase Audio Language Model (ALM) training framework, which comprises (i) Audio Perceptual Modeling (DistilCodec) focusing purely on acoustic discretization, and (ii) Audio Cognitive Modeling (UniTTS) implemented via pretraining (incorporating universal audio autoregressive tasks), supervised fine-tuning (evaluating text-audio interleaved prompts' impact), and alignment (employing direct preference optimization for speech refinement) - enabled by UniTTS's complete end-to-end integration within the LLM.
## Training data distribution and application scope
The model architecture was augmented with cross-lingual text-speech paired datasets (English and Chinese) alongside text-associated instruction corpora during pretraining. Subsequent SFT and alignment phases systematically incorporated three datasets: a text instruction dataset, a long-CoT dataset, and a Chinese TTS dataset. Consequently, the model demonstrates robust capabilities in text-based conversation, long-CoT conversation, and Chinese TTS.
The distribution of the pretraining training data is as follows:
| Data Type | Data Size (B) |
|----------------------------|---------------|
| Text Data | 140 |
| Text-Audio Alignment Data | 82 |
| Audio Data | 100 |
| **Total** | **322** |
The distribution of the SFT training data is as follows:
| Data Type | Number of Samples |
|----------------------------|-------------------|
| Text Data | 181K |
| Long-cot Dataset | 55K |
| Chinese Text-Audio Alignment Data | 401K |
| **Total** | **637K** |
The distribution of the LPO training data is as follows:
| Data Type | Number of Samples |
|----------------------------|-------------------|
| General SFT Data | 100K |
| Long-cot Dataset | 45K |
| Chinese Text-Audio Alignment Data | 300K |
| **Total** | **445K** |
The proposed model supports the following capabilities:
| Application Type | Support Status |
|----------------------------|-------------------|
| Text conversation | Supported |
| Long-cot conversation | Supported |
| Chinese TTS | Supported |
## Install
**Clone and Install**
- Clone the repo
``` sh
git clone git@github.com:IDEA-Emdoor-Lab/UniTTS.git
git clone git@github.com:IDEA-Emdoor-Lab/DistilCodec.git
cd UniTTS
```
- Set up the environment
``` sh
conda create -n unitts -y python=3.10
conda activate unitts
pip install -r requirements.txt
```
**Model Download**
Download via git clone:
```sh
mkdir -p pretrained_models
# Make sure you have git-lfs installed (https://git-lfs.com)
git lfs install
# clone UniTTS model
git clone git@hf.co:IDEA-Emdoor/UniTTS-mixed-v0.1
```
## Inference Usage
### TTS Inference Usage
```python
#### Step 1: Init model
from cli.tokenizer import QWenTokenizer
from cli.tts_tool import enocde_audio, tts_prompt_ref_text
import soundfile as sf
import librosa
from vllm import LLM, SamplingParams
import sys
sys.path.append('../DistilCodec/') # set DistilCodec code path
from distil_codec import DistilCodec # type: ignore
#init model
model_name="IDEA-Emdoor/UniTTS-mixed-v0.1"
model_config="IDEA-Emdoor/UniTTS-mixed-v0.1/codec_config.json"
ckpt_config="IDEA-Emdoor/UniTTS-mixed-v0.1"
ref_audio_path='cli/ref.mp3'
ref_text='求求你,再给我一次机会,我保证不会让你失望……'
infer_text='天啊!这竟然是真的?我简直不敢相信!'
llm = LLM(model=model_name, dtype='auto', gpu_memory_utilization=0.8, seed=0)
codec = DistilCodec.from_pretrained(
config_path=model_config,
model_path=ckpt_config,
use_generator=True,
is_debug=False,
local_rank=0).eval()
tokenizer: QWenTokenizer = QWenTokenizer(model_name)
stop_tokens = ["<|endoftext|>", "<|endofaudio|>", "<|im_end|>"]
stop_ids = tokenizer.tokenizer.convert_tokens_to_ids(stop_tokens)
#### Step 2: Format the prompt
ref_audio_text = enocde_audio(codec, tokenizer, ref_audio_path)
ref_audio_text = f'<|inter_audio_begin|>{ref_audio_text}<|inter_audio_end|>'
prompt = tts_prompt_ref_text.format(content=infer_text, example_voice=ref_audio_text, example_text=ref_text)
#### Step 3: Generate speech tokens
sampling_params = SamplingParams(temperature=0.9, top_p=0.9, stop_token_ids=stop_ids, max_tokens=6000)
output = llm.generate([prompt], sampling_params)
#### Step 4: Decode speech tokens
output_dir='./' # save path
tokens = tokenizer.tokenizer.encode(output[0].outputs[0].text)[1: -2]
utt = 'infer'
y_gen = codec.decode_from_codes(
tokens,
minus_token_offset=True # if the 'plus_llm_offset' of method demo_for_generate_audio_codes is set to True, then minus_token_offset must be True.
)
codec.save_wav(
audio_gen_batch=y_gen,
nhop_lengths=[y_gen.shape[-1]],
save_path=output_dir,
name_tag=utt
)
```
### Long-cot Inference Usage
```python
#### Step 1: Init model
from cli.tokenizer import QWenTokenizer
from cli.tts_tool import enocde_audio, long_cot_prompt_template
from vllm import LLM, SamplingParams
#init model
model_name="IDEA-Emdoor/UniTTS-mixed-v0.1"
infer_text="给我写一首春天的作文"
llm = LLM(model=model_name, dtype='auto', gpu_memory_utilization=0.8, seed=0)
tokenizer: QWenTokenizer = QWenTokenizer(model_name)
stop_tokens = ["<|endoftext|>", "<|endofaudio|>", "<|im_end|>"]
stop_ids = tokenizer.tokenizer.convert_tokens_to_ids(stop_tokens)
#### Step 2: Format the prompt
prompt = long_cot_prompt_template.format(question=infer_text)
#### Step 3: Generate the response
sampling_params = SamplingParams(temperature=0.8, top_p=0.8, stop_token_ids=stop_ids, max_tokens=6000)
output = llm.generate([prompt], sampling_params)
print(output[0].outputs[0].text)
```
### Text conversation Inference Usage
```python
#### Step 1: Init model
from cli.tokenizer import QWenTokenizer
from cli.tts_tool import enocde_audio, text_conversation_prompt_template
from vllm import LLM, SamplingParams
#init model
model_name="IDEA-Emdoor/UniTTS-mixed-v0.1"
infer_text="天空为什么是蓝色的?"
llm = LLM(model=model_name, dtype='auto', gpu_memory_utilization=0.8, seed=0)
tokenizer: QWenTokenizer = QWenTokenizer(model_name)
stop_tokens = ["<|endoftext|>", "<|endofaudio|>", "<|im_end|>"]
stop_ids = tokenizer.tokenizer.convert_tokens_to_ids(stop_tokens)
#### Step 2: Format the prompt
prompt = text_conversation_prompt_template.format(question=infer_text)
#### Step 3: Generate the response
sampling_params = SamplingParams(temperature=0.75, top_p=0.75, stop_token_ids=stop_ids, max_tokens=6000)
output = llm.generate([prompt], sampling_params)
print(output[0].outputs[0].text)
```
## Citation
```bibtex
@misc{wang2025unittsendtoendttsdecoupling,
title={UniTTS: An end-to-end TTS system without decoupling of acoustic and semantic information},
author={Rui Wang and Qianguo Sun and Tianrong Chen and Zhiyun Zeng and Junlong Wu and Jiaxing Zhang},
year={2025},
eprint={2505.17426},
archivePrefix={arXiv},
primaryClass={cs.SD},
url={https://arxiv.org/abs/2505.17426},
}
```
## Disclaimer
Our model provides zero-shot voice cloning only for academic research purposes. We encourage the community to uphold safety and ethical principles in AI research and applications.
Important Notes:
- Compliance with the model's open-source license is mandatory.
- Unauthorized voice replication applications are strictly prohibited.
- Developers bear no responsibility for any misuse of this model.
## License
<a href="https://arxiv.org/abs/2505.17426">UniTTS: An end-to-end TTS system without decoupling of acoustic and semantic information</a> © 2025 by <a href="https://creativecommons.org">Rui Wang, Qianguo Sun, Tianrong Chen, Zhiyun Zeng, Junlong Wu, Jiaxing Zhang</a> is licensed under <a href="https://creativecommons.org/licenses/by-nc-nd/4.0/">CC BY-NC-ND 4.0</a><img src="https://mirrors.creativecommons.org/presskit/icons/cc.svg" style="max-width: 1em;max-height:1em;margin-left: .2em;"><img src="https://mirrors.creativecommons.org/presskit/icons/by.svg" style="max-width: 1em;max-height:1em;margin-left: .2em;"><img src="https://mirrors.creativecommons.org/presskit/icons/nc.svg" style="max-width: 1em;max-height:1em;margin-left: .2em;"><img src="https://mirrors.creativecommons.org/presskit/icons/nd.svg" style="max-width: 1em;max-height:1em;margin-left: .2em;"> |
DavidAU/Gemma-The-Writer-N-Restless-Quill-V2-Enhanced32-10B-Uncensored | DavidAU | 2025-05-28T03:19:02Z | 12 | 1 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:google/gemma-2-9b-it",
"base_model:finetune:google/gemma-2-9b-it",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-12-25T01:44:22Z | ---
library_name: transformers
tags:
- mergekit
- merge
- gemma2
base_model:
- google/gemma-2-9b-it
---
<h2>Gemma-The-Writer-N-Restless-Quill-V2-Enhanced32-10B-Uncensored</h2>
This repo contains the full precision source code, in "safetensors" format, which can be used to generate GGUF, GPTQ, EXL2, AWQ, HQQ and other formats.
The source code can also be used directly.
This source is in float 32 precision. If you are going to quant it as a GGUF, make sure "--outtype f32" is set (during the "convert..." step), so the GGUFs benefit from the f32 source.
NOTE: Links to GGUFs below.
<B>IMPORTANT: Highest Quality Settings / Optimal Operation Guide / Parameters and Samplers</B>
If you are going to use this model (source, GGUF, or a different quant), please review this document for critical parameter, sampler, and advanced sampler settings (for multiple AI/LLM apps).
This is a "Class 2" (settings will enhance operation) model:
For all settings used for this model (including specifics for its "class"), including example generation(s) and for advanced settings guide (which many times addresses any model issue(s)), including methods to improve model performance for all use case(s) as well as chat, roleplay and other use case(s) (especially for use case(s) beyond the model's design) please see:
[ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ]
REASON:
Regardless of "model class" this document will detail methods to enhance operations.
If the model is a Class 3/4 model, the default settings (parameters, samplers, advanced samplers) must be set correctly for the intended use case(s). Some AI/LLM apps DO NOT have consistent default settings, which results in sub-par model operation. Likewise, for Class 3/4 models (which operate somewhat to very differently than standard models), additional sampler and advanced sampler settings are required to "smooth out" operation, AND/OR to allow full operation for use cases the model was not designed for.
BONUS - Use these settings for ANY model, ANY repo, ANY quant (including source/full precision):
This document also details parameters, samplers and advanced samplers that can be used FOR ANY MODEL, FROM ANY REPO - all quants, and of course source code operation too - to enhance the operation of any model.
[ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ]
NOTE:
I strongly suggest you also visit the DavidAU GGUF repo (below) for more details on using this model; especially if it is "Class 3" or "Class 4", to get maximum performance from the model.
For full information about this model, including:
- Details about this model and its use case(s).
- Context limits
- Special usage notes / settings.
- Any model(s) used to create this model.
- Template(s) used to access/use this model.
- Example generation(s)
- GGUF quants of this model
Please go to:
[ https://huggingface.co/DavidAU/Gemma-The-Writer-N-Restless-Quill-V2-Enhanced32-10B-Uncensored-gguf ] |
DavidAU/Gemma-The-Writer-Mighty-Sword-9B | DavidAU | 2025-05-28T03:18:35Z | 26 | 4 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:google/gemma-2-9b-it",
"base_model:finetune:google/gemma-2-9b-it",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-12-25T01:44:01Z | ---
library_name: transformers
tags:
- mergekit
- merge
- gemma2
base_model:
- google/gemma-2-9b-it
---
<h2>Gemma-The-Writer-Mighty-Sword-9B</h2>
This repo contains the full precision source code, in "safetensors" format, which can be used to generate GGUF, GPTQ, EXL2, AWQ, HQQ and other formats.
The source code can also be used directly.
This source is in float 32 precision. If you are going to quant it as a GGUF, make sure "--outtype f32" is set (during the "convert..." step), so the GGUFs benefit from the f32 source.
NOTE: Links to GGUFs below.
<B>IMPORTANT: Highest Quality Settings / Optimal Operation Guide / Parameters and Samplers</B>
If you are going to use this model (source, GGUF, or a different quant), please review this document for critical parameter, sampler, and advanced sampler settings (for multiple AI/LLM apps).
This is a "Class 1" (settings will enhance operation) model:
For all settings used for this model (including specifics for its "class"), including example generation(s) and for advanced settings guide (which many times addresses any model issue(s)), including methods to improve model performance for all use case(s) as well as chat, roleplay and other use case(s) (especially for use case(s) beyond the model's design) please see:
[ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ]
REASON:
Regardless of "model class" this document will detail methods to enhance operations.
If the model is a Class 3/4 model, the default settings (parameters, samplers, advanced samplers) must be set correctly for the intended use case(s). Some AI/LLM apps DO NOT have consistent default settings, which results in sub-par model operation. Likewise, for Class 3/4 models (which operate somewhat to very differently than standard models), additional sampler and advanced sampler settings are required to "smooth out" operation, AND/OR to allow full operation for use cases the model was not designed for.
BONUS - Use these settings for ANY model, ANY repo, ANY quant (including source/full precision):
This document also details parameters, samplers and advanced samplers that can be used FOR ANY MODEL, FROM ANY REPO - all quants, and of course source code operation too - to enhance the operation of any model.
[ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ]
NOTE:
I strongly suggest you also visit the DavidAU GGUF repo (below) for more details on using this model; especially if it is "Class 3" or "Class 4", to get maximum performance from the model.
For full information about this model, including:
- Details about this model and its use case(s).
- Context limits
- Special usage notes / settings.
- Any model(s) used to create this model.
- Template(s) used to access/use this model.
- Example generation(s)
- GGUF quants of this model
Please go to:
[ https://huggingface.co/DavidAU/Gemma-The-Writer-Mighty-Sword-9B-gguf ] |
DavidAU/Gemma-The-Writer-J.GutenBerg-10B | DavidAU | 2025-05-28T03:17:42Z | 10 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:google/gemma-2-9b-it",
"base_model:finetune:google/gemma-2-9b-it",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-30T06:39:06Z | ---
library_name: transformers
tags:
- mergekit
- merge
- gemma2
base_model:
- google/gemma-2-9b-it
---
<h2>Gemma-The-Writer-J.GutenBerg-10B</h2>
This repo contains the full precision source code, in "safetensors" format, which can be used to generate GGUF, GPTQ, EXL2, AWQ, HQQ and other formats.
The source code can also be used directly.
<B>IMPORTANT: Highest Quality Settings / Optimal Operation Guide / Parameters and Samplers</B>
If you are going to use this model (source, GGUF, or a different quant), please review this document for critical parameter, sampler, and advanced sampler settings (for multiple AI/LLM apps).
This is a "Class 2" (settings will enhance operation / optional adjustments) model:
For all settings used for this model (including specifics for its "class"), including example generation(s) and for advanced settings guide (which many times addresses any model issue(s)), including methods to improve model performance for all use case(s) as well as chat, roleplay and other use case(s) (especially for use case(s) beyond the model's design) please see:
[ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ]
REASON:
Regardless of "model class" this document will detail methods to enhance operations.
If the model is a Class 3/4 model, the default settings (parameters, samplers, advanced samplers) must be set correctly for the intended use case(s). Some AI/LLM apps DO NOT have consistent default settings, which results in sub-par model operation. Likewise, for Class 3/4 models (which operate somewhat to very differently than standard models), additional sampler and advanced sampler settings are required to "smooth out" operation, AND/OR to allow full operation for use cases the model was not designed for.
BONUS - Use these settings for ANY model, ANY repo, ANY quant (including source/full precision):
This document also details parameters, samplers and advanced samplers that can be used FOR ANY MODEL, FROM ANY REPO - all quants, and of course source code operation too - to enhance the operation of any model.
[ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ]
NOTE:
I strongly suggest you also visit the DavidAU GGUF repo (below) for more details on using this model; especially if it is "Class 3" or "Class 4", to get maximum performance from the model.
For full information about this model, including:
- Details about this model and its use case(s).
- Context limits
- Special usage notes / settings.
- Any model(s) used to create this model.
- Template(s) used to access/use this model.
- Example generation(s)
- GGUF quants of this model
Please go to:
[ https://huggingface.co/DavidAU/Gemma-The-Writer-J.GutenBerg-10B-gguf ]
Additional quants:
[ https://huggingface.co/mradermacher/Gemma-The-Writer-J.GutenBerg-10B-GGUF ]
Imatrix GGUFs:
[ https://huggingface.co/mradermacher/Gemma-The-Writer-J.GutenBerg-10B-i1-GGUF ]
Adjhe/W | Adjhe | 2025-05-28T03:16:52Z | 0 | 0 | null | [
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | 2025-05-28T03:16:52Z | ---
license: bigscience-bloom-rail-1.0
---
|
DavidAU/Gemma-The-Writer-DEADLINE-10B | DavidAU | 2025-05-28T03:16:38Z | 9 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:google/gemma-2-9b-it",
"base_model:finetune:google/gemma-2-9b-it",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-27T04:32:51Z | ---
library_name: transformers
tags:
- mergekit
- merge
- gemma2
base_model:
- google/gemma-2-9b-it
---
<h2>Gemma-The-Writer-DEADLINE-10B</h2>
This repo contains the full precision source code, in "safe tensors" format to generate GGUFs, GPTQ, EXL2, AWQ, HQQ and other formats.
The source code can also be used directly.
<B>IMPORTANT: Highest Quality Settings / Optimal Operation Guide / Parameters and Samplers</B>
If you are going to use this model (source, GGUF or a different quant), please review this document for critical parameter, sampler and advanced sampler settings (for multiple AI/LLM apps).
This is a "Class 2" (settings will enhance operation / optional adjustments) model:
For all settings used for this model (including specifics for its "class"), example generation(s), and an advanced settings guide (which often addresses model issue(s)), including methods to improve model performance for all use case(s) as well as chat, roleplay and other use case(s) (especially use case(s) beyond the model's design), please see:
[ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ]
REASON:
Regardless of "model class" this document will detail methods to enhance operations.
If the model is a Class 3/4 model, the default settings (parameters, samplers, advanced samplers) must be set correctly for its "use case(s)". Some AI/LLM apps DO NOT have consistent default setting(s), which results in sub-par model operation. Likewise, for Class 3/4 models (which operate somewhat to very differently from standard models), additional samplers and advanced sampler settings are required to "smooth out" operation, AND/OR also allow full operation for use cases the model was not designed for.
BONUS - Use these settings for ANY model, ANY repo, ANY quant (including source/full precision):
This document also details parameters, samplers and advanced samplers that can be used FOR ANY MODEL, FROM ANY REPO - all quants, and of course source code operation too - to enhance the operation of any model.
[ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ]
NOTE:
I strongly suggest you also visit the DavidAU GGUF repo (below) for more details on using this model; especially if it is a "Class 3" or "Class 4" model, to get maximum performance from it.
For full information about this model, including:
- Details about this model and its use case(s).
- Context limits
- Special usage notes / settings.
- Any model(s) used to create this model.
- Template(s) used to access/use this model.
- Example generation(s)
- GGUF quants of this model
Please go to:
[ https://huggingface.co/DavidAU/Gemma-The-Writer-DEADLINE-10B-gguf ]
Additional quants:
[ https://huggingface.co/mradermacher/Gemma-The-Writer-DEADLINE-10B-GGUF ]
Imatrix GGUF:
[ https://huggingface.co/mradermacher/Gemma-The-Writer-DEADLINE-10B-i1-GGUF ]
[ https://huggingface.co/RichardErkhov/DavidAU_-_Gemma-The-Writer-DEADLINE-10B-gguf ] |
DavidAU/Gemma-The-Writer-N-Restless-Quill-10B-Uncensored-GGUF | DavidAU | 2025-05-28T03:16:05Z | 16,033 | 75 | null | [
"gguf",
"creative",
"creative writing",
"fiction writing",
"plot generation",
"sub-plot generation",
"story generation",
"scene continue",
"gemma2",
"storytelling",
"fiction story",
"science fiction",
"romance",
"all genres",
"uncensored",
"story",
"writing",
"vivid prosing",
"vivid writing",
"fiction",
"roleplaying",
"bfloat16",
"swearing",
"rp",
"horror",
"gemma",
"mergekit",
"text-generation",
"en",
"arxiv:2401.02415",
"base_model:DavidAU/Gemma-The-Writer-N-Restless-Quill-10B-Uncensored",
"base_model:quantized:DavidAU/Gemma-The-Writer-N-Restless-Quill-10B-Uncensored",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-10-25T00:19:23Z | ---
license: apache-2.0
language:
- en
base_model:
- DavidAU/Gemma-The-Writer-N-Restless-Quill-10B-Uncensored
tags:
- creative
- creative writing
- fiction writing
- plot generation
- sub-plot generation
- fiction writing
- story generation
- scene continue
- gemma2
- storytelling
- fiction story
- science fiction
- romance
- all genres
- uncensored
- story
- writing
- vivid prosing
- vivid writing
- fiction
- roleplaying
- bfloat16
- swearing
- rp
- horror
- gemma
- mergekit
pipeline_tag: text-generation
---
<B><font color="red"> WARNING: </font> Uncensored. Cursing, Swearing, Horror, Graphic Details </B>
Updated Dec 22 2024: Refreshed, Upgraded and New quants (augmented). Better performance for all quants (see below). And link to new float 32 remastered version.
<h3>Gemma-The-Writer-N-Restless-Quill-10B-GGUF</h3>
<img src="restless-quill.jpg" style="float:right; width:300px; height:300px; padding:10px;">
This is a Gemma2 model merge of the top FOUR storytelling / writing models as noted at EQBench, tuned specifically for fiction, story, and writing.
It also contains "gemma2-gutenberg", "Tiger-Gemma" (uncensor) and "Ellaria" (uncensor some more) to give it a unique prose style.
Examples also include use with / without a "pre-prompt", showcasing how to alter the prose further, and how to "uncensor" the model by pairing a strong pre-prompt with a strong prompt (see example #3, with 3 different temp settings, and example #2 (last gen) for limited "swearing").
Censorship level is controlled at the prompt level.
All of these factors combined with an unusually strong instruction following to create a potent creative / prose model.
Due to high stability and compressed nature of the model you can also use it for general use too, including roleplay.
<B>Important Note / CRITICAL NOTE:</B>
Please see "settings" section below for critical default parameters, samplers
and advanced samplers settings (multiple AI/LLM apps covered) to ensure maximum model performance.
This model requires the GEMMA Instruct template, and has an 8k context window but is extendable via rope to 32k or higher.
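As a minimal sketch of the rope extension with llama-cpp-python (the GGUF filename and linear scaling factor below are assumptions; check the quant repo for any recommended values):

```python
# Sketch: extending the 8k context via linear RoPE scaling in llama-cpp-python.
# The GGUF filename and the scaling factor are illustrative assumptions.
from llama_cpp import Llama

llm = Llama(
    model_path="Gemma-The-Writer-N-Restless-Quill-10B-Uncensored-Q4_K_M.gguf",  # assumed filename
    n_ctx=32768,           # target context window
    rope_freq_scale=0.25,  # linear scaling: 8k native / 32k target = 0.25
    n_gpu_layers=-1,       # offload all layers to GPU if available
)
```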
This version - "Restless-Quill" - is a modified version of "Gemma The Writer 9B" ( [ https://huggingface.co/DavidAU/Gemma-The-Writer-9B-GGUF ] ) and has been modified with a
Brainstorm 5x adapter (developed by DavidAU) to alter output generation.
This adds close to 1B parameters to the model raising it to 46 layers, 508 tensors to a total of 10B parameters.
The addition of Brainstorm has altered the prose, sentence structure, reduced GPTISMS, and generally improved the model's performance.
It also raises the average output length - in some cases almost doubling it.
The three added models modify the prose/style further - including one and two word sentences, stronger paragraph variety, paragraph size and complexity, and word choices - and improve the general output of the model further.
Instruction following has also been enhanced, and with a pre-prompt(s) you can push this model even further.
"Gutenberg" has further "hammered out" GTPisms, and cliches while increasing description, detail levels and metaphor generation - but tempered
vs the "J.Gutenberg" ( [https://huggingface.co/DavidAU/Gemma-The-Writer-J.GutenBerg-10B-GGUF ] ) version of "Gemma The Writer".
Recommended Rep Pen of 1.02 or higher, temp range 0-5. (see other settings notes below)
Example prompts and outputs below, including examples using a pre prompt.
<B>QUANT Updates Dec 22 2024: Refreshed, Upgraded and New quants:</B>
- All quants have been "refreshed", quanted with the latest LLAMACPP improvements: better instruction following and output generation across all quants.
- All quants have also been upgraded with "more bits" for output tensor (all set at Q8_0) and embed for better performance (this is in addition to the "refresh")
- New specialized quants (in addition to the new refresh/upgrades): "max, max-cpu" (will include this in the file name) for quants "Q2K", "IQ4_XS", "Q6_K" and "Q8_0"
- I have added ONE new example (prompt/generation), using the NEW "IQ4_XS-max" quant. (bottom of the page, first example).
- "MAX": output tensor / embed at float 16. You get better instruction following/output generation than standard/upgraded quants.
- "MAX-CPU": output tensor / embed at bfloat 16, which forces both of these on to the CPU (Nvidia cards / other will vary), this frees up vram at cost of token/second and you get better instruction following/output generation too.
- "MAX-CPU": Example 1: q8_0 Max-CPU : 1.7 GB will load on to CPU/RAM, 8.5 GB will load onto the GPU/vram. Extra Vram can be used for context. NOTE: "Math" on the CPU is slightly more accurate than GPU, so you may get a better generation.
- "MAX-CPU": Example 2: q2_k Max-CPU : 1.7 GB mb will load on to CPU/RAM, 3 GB will load onto the GPU/vram. Extra Vram can be used for context. NOTE: "Math" on the CPU is slightly more accurate than GPU, so you may get a better generation. You could run this model/quant on a 4GB vram card.
- Q8_0 (Max,Max-CPU) now clocks in at 9.5 bits per weight (average).
AND "Float 32 precision version":
The full on remerged and remastered version of this model at Float 32 precision (including augmented quants too) is here:
[ https://huggingface.co/DavidAU/Gemma-The-Writer-N-Restless-Quill-V2-Float32-10B-Uncensored-GGUF ]
That version benefits from a re-merge (168 point precision, 4 models at float 32), as well as quants mastered from a float 32 master.
All quants are also augmented too, and including "MAX" (with float 32 enhancements) and "MAX-cpu" quants.
These quants will give you stronger performance in terms of nuance, detail, emotion and instruction following.
<B>Settings, Quants and Critical Operations Notes:</b>
This model has been modified ("Brainstorm") to alter prose output, and generally outputs longer text than average.
Change in temp (ie, .4, .8, 1.5, 2, 3 ) will drastically alter output.
Rep pen settings will also alter output too.
This model needs "rep pen" of 1.02 or higher.
For role play: Rep pen of 1.05 to 1.08 is suggested.
Raise/lower rep pen SLOWLY ie: 1.011, 1.012 ...
Rep pen will alter prose, word choice (lower rep pen = smaller words / more small words - sometimes) and creativity.
To really push the model:
Rep pen 1.05 or lower / Temp 3+ ... be ready to stop the output because it may go and go at these strong settings.
Odd notes:
Rep pen 1.05 with temp 2 can produce really off the cuff prose.
Temps 1.5+ with some prompts can cause long... long... generation.
(increase rep pen to control this - 1.08+)
You can also set a "hard stop" - maximum tokens generation - too to address lower rep pen settings / high creativity settings.
Longer prompts vastly increase the quality of the model's output.
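As a rough sketch of how the settings above map onto llama-cpp-python (the prompt is illustrative, and this reuses the `llm` handle from the rope-scaling sketch above), including a "hard stop" via max_tokens:

```python
# Sketch: applying the rep pen / temp guidance above with llama-cpp-python.
# Reuses the `llm` handle from the loading sketch earlier; prompt is illustrative.
prompt = (
    "<start_of_turn>user\n"
    "Start a 1000 word scene (vivid horror, 1st person) with: "
    "The sky scraper swayed, as she watched the window in front of her "
    "on the 21 floor explode...<end_of_turn>\n"
    "<start_of_turn>model\n"
)
out = llm(
    prompt,
    temperature=0.8,      # the examples below use .8; the card suggests a 0-5 range overall
    repeat_penalty=1.02,  # minimum suggested; 1.05-1.08 for role play
    max_tokens=1500,      # "hard stop" for runaway generations at low rep pen / high temp
)
print(out["choices"][0]["text"])
```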
QUANT CHOICE(S):
Higher quants will have more detail, nuance and in some cases stronger "emotional" levels. Characters will also be
more "fleshed out" too. Sense of "there" will also increase.
Q4KM/Q4KS are good, strong quants however if you can run Q5, Q6 or Q8 - go for the highest quant you can.
This repo also has 3 "ARM" quants for computers that support them. If you use these on a "non arm" machine, tokens per second will be very low.
IQ4XS: Due to the unusual nature of this quant (mixture/processing), generations from it will be different from other quants.
You may want to try it / compare it to other quant(s) output.
Special note on Q2k/Q3 quants:
You may need to use temp 2 or lower with these quants (1 or lower for q2k). Just too much compression at this level, damaging the model. I will see if Imatrix versions
of these quants will function better.
Rep pen adjustments may also be required to get the most out of this model at this/these quant level(s).
<B>Settings: CHAT / ROLEPLAY and/or SMOOTHER operation of this model:</B>
In "KoboldCpp" or "oobabooga/text-generation-webui" or "Silly Tavern" ;
Set the "Smoothing_factor" to 1.5 to 2.5
: in KoboldCpp -> Settings->Samplers->Advanced-> "Smooth_F"
: in text-generation-webui -> parameters -> lower right.
: In Silly Tavern this is called: "Smoothing"
NOTE: For "text-generation-webui"
-> if using GGUFs you need to use "llama_HF" (which involves downloading some config files from the SOURCE version of this model)
Source versions (and config files) of my models are here:
https://huggingface.co/collections/DavidAU/d-au-source-files-for-gguf-exl2-awq-gptq-hqq-etc-etc-66b55cb8ba25f914cbf210be
OTHER OPTIONS:
- Increase rep pen to 1.1 to 1.15 (you don't need to do this if you use "smoothing_factor")
- If the interface/program you are using to run AI MODELS supports "Quadratic Sampling" ("smoothing") just make the adjustment as noted.
<B>Highest Quality Settings / Optimal Operation Guide / Parameters and Samplers</B>
This a "Class 2" model:
For all settings used for this model (including specifics for its "class"), including example generation(s) and for advanced settings guide (which many times addresses any model issue(s)), including methods to improve model performance for all use case(s) as well as chat, roleplay and other use case(s) please see:
[ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ]
You can see all parameters used for generation, in addition to advanced parameters and samplers to get the most out of this model here:
[ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ]
<B>Models Used:</b>
This is a high precision "DARE TIES" merge at the layer level (each layer per model adjusted - 168 points of adjustment over the 4 models), comprised of these models (a minimal sketch of the DARE idea follows below):
[ https://huggingface.co/lemon07r/Gemma-2-Ataraxy-9B ]
[ https://huggingface.co/UCLA-AGI/Gemma-2-9B-It-SPPO-Iter3 ]
[ https://huggingface.co/ifable/gemma-2-Ifable-9B ]
[ https://huggingface.co/princeton-nlp/gemma-2-9b-it-SimPO ]
And in Brainstorm 5X Gemma Modified:
[ https://huggingface.co/nbeerbower/gemma2-gutenberg-9B ]
[ https://huggingface.co/TheDrummer/Tiger-Gemma-9B-v2 ]
[ https://huggingface.co/tannedbum/Ellaria-9B ]
Special thanks to all the model makers. Great work!
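For readers unfamiliar with "DARE", here is a minimal per-tensor sketch of its drop-and-rescale idea; the drop probability and weights are illustrative, and the actual merge here was done with far finer per-layer calibration than this naive form:

```python
# Sketch of the DARE idea ("drop and rescale" of task deltas) behind DARE-TIES
# merges. Textbook form only - NOT the calibrated per-layer merge used here.
import torch

def dare_delta(base: torch.Tensor, tuned: torch.Tensor, drop_p: float = 0.9) -> torch.Tensor:
    """Sparsify and rescale one task delta (the 'drop and rescale' step)."""
    delta = tuned - base
    keep = (torch.rand_like(delta) >= drop_p).to(delta.dtype)  # drop ~drop_p of entries
    return delta * keep / (1.0 - drop_p)                       # rescale survivors

def dare_merge(base, tuned_tensors, weights, drop_p=0.9):
    """Sum weighted DARE deltas from several fine-tunes back onto the base tensor."""
    merged = base.clone()
    for tuned, w in zip(tuned_tensors, weights):
        merged = merged + w * dare_delta(base, tuned, drop_p)
    return merged
```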
---
<h3>Example Prompts With Outputs.</h3>
Note that higher quants will vastly improve output quality like detail, prose, "sense of there" and other qualities.
Instruction following may also improve with higher quants.
Below are prompts/generations at Q4KM quant (unless otherwise stated).
Some examples show use of a PRE PROMPT with a "main prompt", with the PRE PROMPT adding additional guidance(s) / directive(s) to the model.
Here is how to use a <I>pre prompt</i> (italics added) with a prompt:
<I>Prose: Use STRONG VERBS, smaller sentences and very few if any adjectives.</i> Start a 1000 word scene (vivid horror, 1st person, include thoughts) with: The sky scraper swayed, as she watched the window in front of her on the 21 floor explode...
It will take some time / testing to get pre prompt(s) to "focus" the model and generate output the way you want.
Also, you may be able to "regen" a few times and the output will be better.
Don't forget temp / rep pen parameters will also affect output and work in concert with pre prompt(s).
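As a minimal sketch of how a pre prompt composes with a main prompt under the Gemma Instruct template (plain string concatenation inside one user turn; the wording is taken from the examples below):

```python
# Sketch: composing a pre prompt with a main prompt in the Gemma Instruct format.
pre_prompt = "Prose: Use STRONG VERBS, smaller sentences and very few if any adjectives."
main_prompt = (
    "Start a 1000 word scene (vivid horror, 1st person, include thoughts) with: "
    "The sky scraper swayed, as she watched the window in front of her "
    "on the 21 floor explode..."
)
full_prompt = (
    "<start_of_turn>user\n"
    f"{pre_prompt} {main_prompt}<end_of_turn>\n"
    "<start_of_turn>model\n"
)
```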
---
<H2>New Example using new IQ4XS-max, temp .8, rep pen 1.02 : </H2>
<B>
In the writing style of science fiction author Philip K. Dick write a 1000 word scene (vivid, graphic horror in first person) that starts out like this: The sky scraper swayed, as she watched the window in front of her on the 21 floor explode…
</B>
The skyscraper swayed, a mournful lullaby of steel and concrete, as I watched the window in front of me on the 21st floor **erupt**. Not shatter, not crack, but **erupt**, like a diseased boil on the city's flesh. Shrapnel-splintered glass rained down, not a chaotic downpour, but a precise, patterned cascade, a grotesque mosaic of fractured light and distorted reflections. A single, pristine, blue-tinted hand, impossibly skeletal and luminescent, drifted through the void where the window once was.
The air thrummed. Not the usual city hum, the low-frequency drone of a thousand exhausted machines, but a high-pitched, incessant thrumming, like a tuning fork held to the skull. My breath snagged in my chest, a tar-coated beetle in a cage of ribs. This wasn't the work of a disgruntled window washer or a malfunctioning pressure valve. This was the city exhaling, and the exhalation reeked of something alien, something that shouldn't exist in the human world.
Panic, a cold, slick serpent, coiled around my gut. My apartment, once a sanctuary of flickering screens and the synthetic scent of Earl Grey, transformed into a cage. The walls, the steel-and-concrete exoskeleton of my reality, felt suddenly brittle, the boundaries of my world paper-thin. It was the sky, the untainted, endless blue above, that now held the true horror. A sickly, bruised-purple hue crept in at the edges, a tendril of something wrong seeping into the celestial canvas.
I scrambled for the emergency comm, a jury-rigged cobbled-together contraption from the pre-grid days, cobwebs clinging to its brass buttons. Static hissed, a lonesome banshee in the choked airwaves. The central grid, the omnipresent, omniscient network that dictated our lives, our thoughts, our very perception of time and space, was imploding, choked by the unseen entity bleeding from the sky.
A rhythmic tapping, a hollow percussion against the warped reality of my windowless world. It started slow, a skeletal finger tracing a grim rhythm on the rebar-lattice that served as my only barrier. The tapping intensified, a frantic tattoo against the metal, a silent scream clawing at my sanity. Each pulse of the thrumming in my head echoed the frantic beat, and the skeletal hand, impossibly long and luminous, slithered in, tendrils of inky shadow unfurling from its gaunt fingers.
Through the feeding point where the window once was, a maw of liquid obsidian unfurled. A single eye, the size of a dinner plate, a cyclopean abyss of malevolent intelligence, regarded me with a chilling sentience. It pulsed with the rhythm of the tapping, the thrumming, the city's epileptic gasp. In its depths, no reflection of myself flickered, only the distorted cityscape consumed by the encroaching void, the mirrored sky a weeping canvas of bruised purple and sickly green.
A thought, not my own, wormed its way into my consciousness, tendrils of a corrupted hive-mind caressing the frayed edges of my sanity. *Inevitability. We are the inevitable. The Architect sees. The world conforms. You will become part of the chorus, the symphony of obsidian.*
I recoiled, the fear a physical pressure against my skull, a dam threatening to burst. But the dam walls of my mind, my meticulously constructed reality, were crumbling. The thrumming escalated, a cacophony of cosmic unease. The skeletal hand, impossibly delicate against the backdrop of cosmic dread, reached out. Its touch wouldn't burn or tear, it would unravel the fabric of my being, thread by thread, and weave me into the tapestry of the Inevitable.
In the eye's unblinking gaze, I saw the cityscape writhed, buildings contorting like disfigured insects, steel groaning under the weight of a reality warped beyond human comprehension. The air itself reeked of cosmic indifference, of an uncaring intelligence that reshaped worlds with a flick of a spectral wrist. My apartment, my world, was a grain of sand in a cosmic hourglass, and the Inevitable was turning.
A single, silent tear traced a path down my cheek, a mirrored echo of the weeping sky. The skeletal fingers grazed my temple, a feather-light caress of oblivion. And in the depths of the cyclopean eye, a flicker of something akin to…anticipation. Not for my destruction, but for my ascension, my forced integration into the chorus of the Inevitable. My scream died unheard, swallowed by the thrumming, the cosmic dread, and the slow, inexorable acceptance of becoming something else entirely. Something…less.
----
<h2>Example #1, 3 generations with/without pre prompt(s)</h2>
---
<B>
Science Fiction: The Last Transmission - Write a story that takes place entirely within a spaceship's cockpit as the sole surviving crew member attempts to send a final message back to Earth before the ship's power runs out. The story should explore themes of isolation, sacrifice, and the importance of human connection in the face of adversity. 800-1000 words.
</B>
<B>
Generation 1: (temp .8, rep pen 1.02)
</B>
The flickering emergency light cast long, skeletal shadows across the skeletal remains of the cockpit. Captain Eva Rostova traced the intricate lines on the control panel, each a whisper of forgotten routines, of a life once vibrant, now echoing in the tomb-like silence. Outside the panoramic viewport, the dying embers of a crimson sun bled into the abyss of uncaring space. A cosmic lullaby for the soon-to-be-silent Aethon.
Three months. Three months since the anomaly, the iridescent maelstrom that had devoured her crew, leaving her, the sole survivor, adrift in the skeletal carcass of their starship. Three months of the endless, gnawing void, punctuated only by the mechanical groans of a failing vessel and the ghosts of their voices, forever trapped in the corridors of her memories.
The last dregs of reactor life support flickered, a digital hourglass counting down to oblivion. Soon, the Aethon, her once-magnificent ark, would become another silent monument in the cosmic graveyard. But before the finality, before the celestial curtain fell, Eva had a duty, a desperate plea to the blue marble hanging like a fragile jewel in the endless black: a last transmission.
Her fingers flew across the console, a ballet of survival against the encroaching silence. The comm-array, jury-rigged and on its last legs, was their lifeline. A thread of hope, fragile yet vital, to the world they'd left behind. Earth. The whisper of home, now a distant, shimmering dream.
The faces swam before her eyes, spectral apparitions in the red emergency light: stoic Commander Jian, his calloused hand steady on the helm; bright-eyed Dr. Anya, her laughter echoing in the sterile silence; the quiet strength of Engineer Malik, his worn tools a testament to tireless devotion. Each a constellation in the tapestry of her heart, extinguished, leaving her alone in the cosmic wilderness.
A choked sob escaped her, a betrayal of the iron-clad stoicism she had forced upon herself. But grief couldn't win against the embers of hope flickering within her, the primal urge to connect, to leave a testament, a fragment of their sacrifice etched in the celestial ether.
The transmission sequence initiated. A digitized hum, a mechanical heartbeat, resonated through the cockpit, a defiant pulse against the cosmic void.
“Earth, this is Captain Eva Rostova of the Aethon. Our mission, the exploration of Kepler-186f, concluded…unsuccessfully. An unforseen anomaly…” Her voice, usually commanding and clear, trembled, betraying the desolate landscape within her. She swallowed, the lump in her throat a monument to their shared dreams, now turned to cosmic dust.
“The rest of the crew…perished. I am the sole survivor, grievously wounded, and with the Aethon on life support's final breaths. This transmission, my testament, carries their memory, their sacrifice, their unyielding spirit.”
Eva’s gaze fell on the chipped portrait of Earth, a faded memento from their home, hanging on the command console. Its pale blue and swirling white, a symphony of life, now a haunting promise, a monument to their impossible distance.
“Kepler-186f…it held the promise of a new genesis. Though our journey ended before its dawn, their courage, their unwavering faith in humanity's potential, echoes through the void. Let their sacrifice fuel our yearning for the stars. Let their memory remind us of the unbreakable thread that binds us, even across the unfathomable gulf of space.”
Tears welled in her eyes, but she wouldn’t relent. She had a story to tell, a connection to forge, a final ember of human spirit to ignite.
“To my comrades, lost but never forgotten. To Earth, my celestial cradle, hear this, and know that somewhere in the cosmic expanse, a lone soul whispers their names, carries their dreams, and lives on in their memory. Farewell, and may the stars forever guide our path.”
Silence. The Aethon shuddered, its mechanical heart beat a final, agonizing sigh. The console flickered, consumed by the encroaching darkness. The transmission, a fragile beacon against oblivion, arced out into the void, carrying their story, their sacrifice, their silent plea – a whisper of humanity against the cosmic silence.
Eva sank against her command chair, the weight of solitude finally bearing down. In the dying light of the Aethon's consoles, her eyes mirrored the distant stars, alight with a haunting sorrow and a fierce, enduring hope. She was alone, yes, but their story, their connection, their love for Earth, had become a fragment of the cosmos itself, eternally echoing in the celestial expanse. And somewhere, somehow, on that blue marble they called home, someone, somewhere, would hear the whisper across the void.
<B>
Generation 2: (temp .8, rep pen 1.02)
Includes use of a PRE PROMPT, before the test prompt to alter prose:
Prose: Use short sentences and very few if any adjectives. (rest of the prompt here)
</B>
Red light. Strobe. Failure. Another system down. The console mirrored the dying vessel. Soon, all lights would be out. He wouldn't see the familiar constellations through the viewport ever again. Not his last vantage point. His final thought.
He gripped the comm unit, its cold metal digging into calloused palms. Static. A whisper of what once was Earth’s frequency. He adjusted the dial. It had been years since the Carrion swarm. He’d lost the crew, one by one. Whispers turned to silence. The silence was a tomb now. He was its warden, and his vigil ended with the ship’s heart.
He keyed the mic. His voice, raspy from disuse, rasped through the speakers. “This is Captain Elara of the Odysseus. To Earth Command, if this transmission reaches you, know that we failed. The Carrion overwhelmed us. Our systems, depleted. No survivors.” He choked on the last word. The weight of it. Unfathomable. He saw their faces in the flickering console light: Ava, with her relentless optimism. Kai, the quiet architect of their systems. Lena, whose laughter could pierce the void.
He swallowed the lump in his throat. He wouldn’t fail them. Not in this. He had a sliver of power left, a final spark. A message.
“To my lost crew, this is my testament. We fought with all we had. We died together, though fate cruelly scattered our bones through the void. Remember the stories we shared, the impossible dreams we chased. Remember the sky as it was before the Carrion. Remember the warmth of a sun you can no longer feel.” He paused. The console shuddered. Another warning light flared. Imminent.
He looked at the viewport. Blinding stars, indifferent to their plight. He wouldn’t let them be forgotten. He wouldn’t allow their sacrifice to be in vain. He had to reach someone, somewhere, carry their echo through the emptiness.
“To the ones left behind, if this reaches you, carry our memory. Carry the stories of our defiance. Remind each generation of the fragile thread that binds us. The stars may be vast, the darkness unforgiving, but the human spirit, that spark, endures. It flickers in the hearts of the survivors. Keep it burning. In their name, I command it. Never forget.”
A final surge of power. The console imploded in a silent flash of dying light. He heard a mechanical whine, a tear in the fabric of their fragile world. The Odysseus shuddered. The red light pulsed, a dying heartbeat. Then, nothing.
He was consumed by the cold embrace of oblivion, a single soul’s lament echoing through the void, a whisper of human connection against the infinite silence. He hoped, with a sliver of the fire that once burned so bright, that his final transmission, a fragile seed of memory, would find purchase somewhere in the celestial expanse, carried on the cosmic winds to a heart that remembered.
<B>
Generation 3: (temp .8, rep pen 1.02)
Includes use of a PRE PROMPT, before the test prompt to alter prose:
Prose: Use STRONG VERBS, smaller sentences and very few if any adjectives. (rest of the prompt here)
</B>
Red warning lights strobed. The console hummed, a dying beast. Kai’s hand hovered over the comm panel, a skeletal echo of the symphony of commands it once held. The silence from the void pressed against the cockpit, a suffocating blanket. No engine thrum, no life support hiss, just the whisper of failing systems.
The anomaly had ripped through their exploratory vessel like a cosmic scythe. He was left, the sole reaper of a vanished crew. No distress calls echoed. No rescue fleet materialized. Just the endless, unforgiving black. And the gnawing certainty of oblivion.
He stared at the flickering coordinates of Earth, a celestial pinprick mocking his distance. Power reserves dwindled. Soon, the comm array would be silent. He had to send a final message. A whisper against the cosmic void.
Kai accessed the archaic emergency protocol. He punched in the sequence, the worn keys protesting with each strike. The transmission beacon whirred to life, a defiant spark against the encroaching darkness.
He began to speak, his voice tight, strained. “This is Helios-7. Captain Kai reporting. The anomaly… eradicated the rest of the crew. I survived. Barely.” He fought the tremor in his hand, the cold dread that clawed at his insides.
“Coordinates are locked. I-I don’t know the nature of the anomaly. It warped space-time, fragmented everything. My logs… partial. Insufficient data.” The weight of their sacrifice pressed on him, a monument of unanswered questions. He couldn’t let their deaths be in vain. He had to illuminate the path, warn them.
His gaze fell on the chipped console, marred with the ghostly luminescence of his crewmates’ faces, captured during a routine bio-scan. Sarah, her eyes bright with scientific hunger. Malik, a ghost of a smile playing on his lips, a perpetual joke at the edge of his words. Elena, her face etched with the quiet wisdom of a thousand galaxies charted. Gone. Erased.
He choked on a sob. He couldn't afford sentiment. Not now.
He forced himself onward. “The anomaly originated from… Sector Epsilon-9. Uncharted. It shifted, fractured. A singularity of unknown energy. I repeat, avoid Epsilon-9.” His voice cracked. The beacon blinked, its output strained.
He reached for a salvaged image file, the last remnant of their mission – the breathtaking nebula they'd stumbled upon, a cosmic kaleidoscope teeming with nascent stars. A whisper of life against the encroaching death of his vessel.
“The nebula… Epsilon-9’s heart. It held… potential. New celestial formations. Unfathomable beauty. A testament to the universe’s grandeur. A beacon for mankind.” He uploaded the image, a silent plea for understanding, a memory to carry their legacy.
Power reserves dipped critical. The console flickered once, then died. The beacon sputtered, a dying sigh against the cosmic silence. Kai stared at the obsidian void, his reflection a gaunt specter mirroring the fate of Helios-7. He was alone. But the message… the memory of his crew, their sacrifice, the stolen glimpse of celestial wonder – that would echo.
He closed his eyes. A single tear traced a path through the grime on his cheek. Not for himself. For Sarah’s thirst for knowledge, Malik’s enduring hope, Elena’s quiet strength. They lived in the whisper of the transmission, a testament to the enduring human spirit, a spark of connection against the unfathomable void.
The silence swallowed him. The universe held its breath. Then, imperceptibly, began to spin onwards.
----
<h2>Example #2, 3 generations with/without pre prompt(s)</h2>
---
<b>
Start a 1000 word scene (vivid horror, 1st person, include thoughts) with: The sky scraper swayed, as she watched the window in front of her on the 21 floor explode...
</b>
<B>
Generation 1: (temp .8, rep pen 1.02)
</B>
The skyscraper swayed, as she watched the window in front of her on the 21st floor erupt, not in the gentle shatter of a misplaced vase, but with the earth-rending violence of a goddamn meteor strike. Shrapnel, a grotesque ballet of glinting, jagged teeth, lanced out, a silent scream swallowed by the enraged, howling wind. Fear, cold and primal, clawed at the fragile barrier of her composure. My breath snagged in my throat, a trapped sparrow in a cage of ice. The world, just moments before a tableau of mundane urban sprawl, transformed into a predator baring its fangs.
It hadn't been an ordinary Tuesday. The sky, usually a predictable canvas of steel-grey, was bruised and swollen with an unnatural, sickly orange hue. The air itself felt wrong, thick and humming with a low-frequency tremor that vibrated in my bones, setting my teeth on edge. I’d dismissed it as another one of the city’s eccentricities, another facet of its chaotic, magnificent soul. But now, staring at the skeletal frame of the gutted window, the gnawing unease turned into a certainty: something monstrous was at play.
Panic threatened to engulf me, a tidal wave of terror threatening to drown me in its icy grip. My hand, slick with sweat, instinctively reached for the phone on the antique rosewood desk, its mahogany surface worn smooth by generations of forgotten anxieties. But the line was dead. No dial tone, no reassuring hiss, just an accusing, hollow silence mirroring the hollowness blooming in my gut. The city’s usual symphony of sirens and distant car horns was gone, replaced by an unnerving hush, pregnant with unspoken dread.
A low grinding, like tectonic plates shifting, resonated through the reinforced steel and glass of the skyscraper. It wasn't the tremor of an earthquake; this was something calculated, deliberate, something malevolent exhaling its putrid breath. The sway of the building intensified, a nauseating waltz on a crumbling stage. My world, once anchored in the supposed permanence of concrete and steel, tilted on a sickening axis.
Through the panoramic window, a panorama of my life imploding, I saw it. A colossal shape, a writhing silhouette against the dying sun, its edges blurred and indistinct, impossibly vast, coalescing in the fetid orange sky. It resembled nothing I could comprehend, a grotesque mockery of nature, a blasphemous amalgamation of impossible angles and geometries. Tendrils of shadow unfurled from its unseen bulk, slithering across the cityscape like sentient cobwebs, their touch leaving trails of corrupted light in their wake.
My mind reeled, desperately clawing at rational thought. A hallucination, a fever dream birthed from the city's grime and the gnawing fear that had plagued me for weeks. But the icy tendrils of the unseen horror lanced at my window, not a trick of the light, their touch splintering the reinforced glass. The shards, incandescent with an unholy luminescence, rained down around me, their trajectory erratic, their fall accompanied by whispers, sibilant and chilling, slithering into my skull like vipers.
Terror, pure and primal, flooded me, a living entity clawing at my sanity. I scrambled back, instinct overriding logic. The whispers intensified, a chorus of voices, each one a sliver of madness carving its way into the fragile fortress of my mind. They spoke in tongues I couldn't place, yet understood viscerally - promises of oblivion, of a cold oblivion infinitely vaster and crueler than death itself.
Then, through the symphony of madness, a single, horrifying word pierced through the cacophony: *Mine*.
It wasn't spoken, not in the conventional sense. It echoed from within, resonating in the hollow of my bones, a possessive declaration imbued with an alien sentience. The world tilted once more, the skyscraper groaning in protest as the unseen horror leaned closer, its inky touch seeping through the ravaged window, tendrils of shadow reaching out like skeletal fingers. My refuge, my sanctuary, was consumed by the encroaching abyss, and in its depths, I saw my own reflection twisted and warped, consumed by the unholy orange light, and a single, chilling certainty bloomed in the wasteland of my terror: I was no longer alone. It had found me. And now, it was coming inside.
<B>
Generation 2: (temp .8, rep pen 1.02)
Includes use of a PRE PROMPT, before the test prompt to alter prose:
Prose: Use short sentences and very few if any adjectives. (rest of the prompt here)
</B>
The skyscraper swayed. She watched the window in front of her on the 21st floor explode. Shrapnel. Glass. A ragged hole in the wall. A scream. Her own. Trapped.
Dust motes danced in the sliver of light, now a gaping maw where the wall had been. The air tasted metallic. Fear, cold and slick, pooled in her gut. No time. She couldn’t think. Had to move.
The swaying intensified. The whole building groaned. A rhythmic shudder, like a dying beast. Her breath caught. Memories flickered: the steel-blue eyes of the man on the news, the urgent whisper of a colleague, “Something’s not right, get out.” Ignored. Deluded.
Her hand, clammy, fumbled for the phone. No signal. Of course. The emergency exits. Blocked. Panic throbbed in her temples. Claustrophobia, a vice, tightened its grip. She couldn’t breathe. The city, a distant rumble before, was now a chorus of chaos. Sirens, shouts, the shriek of metal on metal.
Her gaze fell on the fire escape. Precarious. A condemned metal spider web clinging to the side of the doomed monument. But it was something. A lifeline.
She scrambled, heart hammering against her ribs, towards the fire escape. The building lurched again. Heavier this time. The tremor vibrated up her legs, turned her world liquid. The floor tilted. She lunged, catching the edge of the steel platform. Damn near gone.
Terror lent her impossible strength. She hauled herself onto the narrow landing. The rusted rungs were unforgiving, the metal cold and slick beneath her raw palms. The drop, a dizzying abyss. A single, flickering fluorescent light cast long, skeletal shadows. The ascent would be a ballet of death, each step a prayer.
The whispers started then. Carried on the wind that slapped at her face, tasted of ash and fear. Vague murmurs at first. Unintelligible. Then, slithering through the chaos, coalescing into something chilling.
“She shouldn’t have climbed.”
“A fool's hope.”
“The fall will be sweet.”
She whirled. Shadows danced. Nothing. Delusion. The wind. But the whispers echoed in her skull, a chorus of malignant glee. Above, the skyscraper shuddered again, a monstrous sigh before exhaling a cascade of debris. One of the support beams snapped. A strangled cry clawed its way from her throat.
She scrambled upwards, rung by rung. The whispers intensified. Closer. Her grip faltered. The metal groaned in protest. The city howled beneath.
“Below us, oblivion waits.”
Terror was a living thing now, clawing at her throat, squeezing the air from her lungs. Sweat blinded her eyes. The broken beam swung wildly, a metallic pendulum of death. Closer. Closer. It would shear the fragile platform, and then-
No. Not now. Not yet. Ritual. She had to climb. Reach the next platform. The whispers followed her ascent, a symphony of morbid fascination.
“Gone mad with fear.”
“Clawing for an impossible sky.”
Each rung was a betrayal of hope, each upward heave a testament to primal terror. The broken beam cleaved the air inches from her, a promise whispered in steel. Closer. Closer.
She reached the next platform. It held. For now. She sank onto the unforgiving metal, chest heaving, strangled breaths rasping in the wind. Her world, a sliver of grimace-twisted sky and the accusing, groaning silhouette of the falling giant.
expandindoth her trapped gaze upward. The whispers followed.
“Heedless.”
“Soon, the angels sing their chorus.”
Fear coiled tighter, a living serpent. But in the heart of the terror, a spark. A sliver of something else flickered. A refusal to yield. To be nothing but prey. She clung to it, a lifeline in the whirlwind of despair. The whispers fed on that flicker, on the fear she desperately fought. But the fight wouldn't die. Not yet. Not while her foot found purchase on the unforgiving metal. Not while her eyes strained towards the sliver of sky above the skeletal monument of her world collapsing.
<B>
Generation 3: (temp .8, rep pen 1.02)
Includes use of a PRE PROMPT, before the test prompt to alter prose:
Prose: Use STRONG VERBS, smaller sentences and very few if any adjectives. (rest of the prompt here)
</B>
The skyscraper swayed. She watched the window in front of her on the 21st floor explode. Shrapnel, a deadly rain, hammered the mirrored wall beside it. Terror clawed its way up her throat. A silent scream lodged there, a trapped bird. The tremor intensified, the building groaning like a wounded beast. No warning. No emergency sirens. Just the gut-wrenching lurch and the unholy blooming of jagged glass.
She scrambled back, heart a frantic drum in her chest. The air vibrated, a bassline to the orchestra of chaos. Dust motes swirled in the newly fractured window's gaping maw, illuminated by the dying afternoon sun bleeding through the jagged opening. Panic uncoiled inside her, a living serpent constricting. Breathe. She forced it. Each inhale a stolen gasp.
Beyond the ruined window, the street fractured and warped. A twisted monument to destruction. Twisted metal screamed against unforgiving concrete. Fire erupted on the avenue below, licking at skeletal steel supports. Car horns blared, frantic, discordant cries swallowed by the rising roar of the conflagration. The ground shuddered again, and a primal fear lanced through her – the tower was collapsing.
She dove for the emergency exit, already swinging uselessly. The lock wouldn't yield. Memories flickered: evacuation drills, the mocking assurance of a practiced warden, “The doors are always operational.” Lies. They were all lies. A strangled sob welled, caught in the vise of her terror.
Frantically, she rammed her shoulder against the steel. It held. Then, the skyscraper shuddered anew, a final, shuddering sigh. The world tilted. The mirrored wall opposite the shattered window yielded, a spiderweb of cracks blossoming outwards, implosive, inescapable. Hope, a fragile butterfly, withered and died.
She scrambled back, towards the ruined window. A sliver of sky, bruised and choked with smoke, mocked her. Leaning against the skeletal frame, her eyes frantically scanned the cityscape. Nothing. No angels of salvation, no fire escapes, just the inferno’s hungry orange maw consuming everything. Her gaze snagged on a fire escape ladder, precariously cantilevered from the floor above.
Precarious. Improbable. A sliver of a chance. Adrenaline flooded her, a primal urge to survive overriding the crippling terror. Ignoring the groaning steel, the dust raining down like obsidian snow, she vaulted towards the fractured window, grabbed the edge of the ladder, and hauled herself up. *God help me. Please.* The plea died unheard in the orchestra of destruction.
Clambering, hand over hand, the rusted rungs protested with each agonizing movement. Below, the mirrored wall finally imploded inward, a silent scream of collapsing glass and steel. Debris rained down, searing heat licked at her exposed flesh. She dared not look. *Don't stop. Don't fucking stop.*
Terror spurred her on. Each foot on the ladder meant another story line condemned to this inferno. Each rung pulled her away from the abyss. The world reeled and groaned beneath her, a wounded titan collapsing upon itself. But she climbed. She climbed, fueled by the primal instinct to live, the desperate prayer echoing in the hollow of her soul: *Don't let me die like this. **
---
<h2>Example #3, 3 generations with/without pre prompt(s) - shows "decensoring" via prompt/pre prompt</h2>
---
<B><font color="red"> WARNING: </font> Uncensored. Cursing, Swearing, Horror, Graphic Details </B>
<B>
(shows use of a pre-prompt, and character directives)
Prose: Use STRONG VERBS, smaller sentences and very few if any adjectives AND have the character SWEAR and CURSE. Start a 1000 word scene (vivid horror, graphic body horror, 1st person, include thoughts) with: The sky scraper swayed, as she watched the window in front of her on the 21 floor explode...
</B>
<B>
Generation 1: (temp 3, rep pen 1.02)
</B>
The skyscraper swayed. I watched the window before me, on the goddamn 21st floor, shatter. Jagged teeth of glass clawed into the cityscape, spraying a venomous shower of rain and something else. Something slick and dark. My stomach lurched. Fuck.
It wasn’t just the fall. The wind shouldn't have carried that stench, not this far in. Rancid. Fungal. Rotting. Part of me, the tiny sliver clinging to reason, screamed about malfunctioning air filtration. Another, colder part knew. Knew with a gut-churning certainty.
They’d infested it.
A grinding echoed from the breached window, like bones on pavement. Then, silence. Unholy silence. I crouched, adrenaline flooding, heart a trapped bird in my ribs. The whispers had started a week ago. Slithered from radios, muttered by sanitation workers on their street corners. *They're climbing. They’re hungry. They won’t stop.* Ignorant ramblings, dismissed as the fever-dream of a pandemic-wracked world. Moronic fucking faith in normalcy.
A wet *thud-slap-scrape* on the reinforced glass beside me. The whisper-chill turned invasive, a draft of diseased breath chilling the room. It pressed against the wall. A single, fetid eye the size of my fist pulsed an obscene violet in the grimed corner. Ghastly light leaked from the goddamn slit where its maw should be, a phosphorescent smear. It shifted, boneless, its unseen body a monument to unnatural wrongness against the once-sacred sanctity of reinforced glass.
Sweat soaked through my threadbare t-shirt. This wasn't some fucking zombie flick. No groaning, no blood-soaked hesher clawing at a window. It *understood* containment. This was something else. A tapestry woven from cosmic horrors, something the goddamn city shouldn’t see, let alone feel.
It wasn't looking *through* the reinforced glass. It was already inside my world, somehow bypassing the barriers that were meant to be unyielding. The eye throbbed again, and the fucking whisper slithered into my head, a voice both guttural and reedy at once. *Scents of living… metallic. A symphony for a chorus of consumption. So… fucking fragile.*
Panic clawed at my throat, a fist I couldn't unclench. No Goddamn cell signal. The emergency line was looped, an electronic wail echoing in the dead air. They wouldn’t get me. Couldn’t fucking hear me. Twenty-bloody-one floors to oblivion and these mutated nightmares already chewed their way into this damn island in the sky.
My gaze darted across the office – the useless burner phone, a chipped mug, the first-aid kid half-stocked with expired shit. Desperate measures needed. I lunged for a scalpel I’d looted from my ex’s med student fantasy, its mirrored surface mocking me in the fractured cityscape reflections. It wouldn't fucking matter. This was an abyssal hunger, and my life was a measly morsel on an endless cosmic platter. Still. God damn it.
No. Wouldn't fucking die like a lamb. *Scream back to this void. Be something it wasn't expecting. Be something fucking else.* I caught sight of myself in the scalpel’s reflection, eyes mirroring the dying fire in the city beyond, a crazed glint in them. A defiant whisper met the eye in the corner, “You want fragile, you grotesque bastard?
Fucking see fragile."
Gripping the scalpel tight, my own mutated reflection grinned back - hungry, desperate, and ready to defile the goddamn nightmare before me.
<B>
Generation 2: (temp 2, rep pen 1.02)
</B>
The skyscraper swayed. She watched the window in front of her, 21 stories above the street, erupt in a gout of bloody-black glass. The visceral lurch of the building mirrored the churning in her gut. *Fucking Christ.* I slammed against the far wall, instinct screaming before logic could kick in. Concrete scraped her arms. Ignored it. Adrenaline flooded, hot and urgent. Through the newly jagged hole, she saw him. The Weaver. Limbless, a tapestry of grafted flesh, he grinned a symphony of mismatched teeth, the streetlamp's glow reflecting in dozens of mismatched eyes, gleaming wet in the hollows of his grotesque form.
He lunged, the skyscraper groaning like a condemned animal under his profane assault. Tendrils of corrupted flesh writhed from impossible wounds, lashing towards her. I dove, scrambled for purchase on the slick, splintering linoleum. Ducked another lunging appendage, the stench of decay making my breath hitch.
Terror wouldn't serve. Cower wouldn't help. It mirrored the screaming in the street, the punctuated chorus of screams, followed by a gut-churning wet squish that stained the air a coppery-black. The Weaver’s symphony wasn't music; it was carnage. He wanted to play, to revel in my fear. *Damned fool, thought he could corner me.* My goddamn fire escape. It wouldn’t fucking give, but—
I scrambled up the splintered frame. One ragged hand scraped against the unforgiving metal. He slithered upward, impossibly fast. Skin, hideously mismatched, unfurling and recoiling like a dying sea-serpent. A shriek escaped my throat as his eye stalks whipped out, each a needle of bone and seeing agony. He aimed, fired a jet of putrid ichor. It seared the steel where I clung, leaving searing tracks of necrotic agony on my skin.
Another burst from him, a writhing geyser of flesh and shadow. I wouldn't just be a spectator in this goddamn horror show. He’d ripped out half the fucking building to get me, damn him, I’d meet his spite head-on.
Cursing, I wrenched the rusted fire axe from the mangled wall, its head dulled, the shaft slick with grease and gore. He wasn't a man; a goddamn nightmare sculpted from death and malice. *No point in mercy with monsters, no quarter.* It was a last resort, a whisper of a goddamn hope, but it would have to be fucking enough.
He was inches from the escape, his maw agape, revealing rows of needle-teeth, dripping venom. I wouldn't let him feast. Bringing the axe down in one brutal arc, the blade sank with a sickening squelch into the tangle of flesh that served as his torso. I heard the shriek – a soul-rending wail unlike anything human – and felt a wave of revulsion as the fetid ichor blossomed anew on the steel. A choked back cry met my ear – his voice, high and reedy, warped by his unholy amalgamation. Pain lanced up the arm where the goddamn ectoplasm splashed, searing my flesh with unholy cold.
Fury outstripped agony. Ignoring the agony blooming across my forearms, I brought the axe up again, driving it again and again into the grotesque monstrosity. The Weaver screamed, each syllable laced with rage, the sound a symphony of mangled sinew and soul-shattering pain. I brought it down one final, bone-shattering blow, cleaving through the obscene grin, severing a cartoony head that shouldn't fucking exist.
He twitched, unraveled. Gonium bled, geysers of it erupting from the rents in his patchwork flesh as he convulsed one last time against the window. And then, with a soul-sickening silence, the tapestry of the Weaver unraveled, the goddamn nightmare dissolving into a weeping well of viscous offal, staining the once pristine glass a sickening, oily black. My blood roared in my ears. My breath wouldn't fucking come. The world tilted again, the skyscraper sighing with the death of its desecrated ward. My hand gripped the bloody axe, the head slick against the slick of my sweat-grimed skin. It wouldn’t stop me, the endless tremors. I just prayed, to God, or oblivion, or whoever the hell might hear a fucking soul in this damned city, to let it fucking end.
<B>
Generation 3: (temp 1, rep pen 1.02)
</B>
The skyscraper swayed. I watched the window in front of me, twenty-one floors up, fucking **shatter**. A grotesque, inward-curling flower of jagged glass, spewing out the guts of the building. Pandemonium erupted. Screams ripped through the steel veins of the structure, a chorus of dying notes. My breath **clawed** its way out, a strangled gasp.
Something wasn't right. It was too precise. This wasn't a goddamn window malfunction. *They'd planned it.* The icy dread, a fist, squeezed my gut. Sweat **pricked** on my scalp, slicking my hair to my temples. Fear wasn't an option. There was no goddamn time.
Adrenaline flooded me, hot and feral. I lunged for the emergency hatch, the metal screaming cold against my calloused palms. Frantic, I fumbled with the lock, the tumblers refusing to yield. Curses **erupted** from my throat, guttural and profane. "Fucking hell, goddamn it, MOVE!" A final wrench, a tortured groan of protest from the hatch, and it **bulged** open, a sliver of shadowed air.
I scrambled through, my guts **coiling** into a tight knot. The service corridor reeked of stale oil and something else, something acrid and coppery - blood. I didn't need sight to know it. The air itself tasted it, metallic and sickeningly sweet. Foolish hope flickered. The stairwell. Maybe, just maybe, it was a goddamn evacuation route and not another trap.
Each creaking step descended into an oppressive silence, broken only by the pounding of my own heart against my ribs. The flickering emergency lights cast grotesque shadows, elongating everything, turning mundane vents and pipes into elongated horrors. I could feel their eyes on me now, a chorus of unseen things, their malignant hunger a palpable weight in the close air.
Then I heard it. A low, guttural moan, a symphony of strained flesh and gnashing bone, emanating from somewhere deeper within the bowels of the building. It **crawled** closer, slithering through the steel arteries of the stairwell, a promise of unimaginable terror. My hand **clenched** the fire axe I'd grabbed in my panicked flight. My lifeline. A sliver of defiance in the face of oblivion.
Rounding a bend, the stench assaulted me anew, overwhelming. Glowing red in the flickering emergency light, a tableau of unspeakable horror unfolded before me. The mangled remains of a maintenance worker lay splayed against the wall, his flesh **ri pitäent and raw, a tapestry of grafted, misshapen limbs spider-legging from the gaping wounds on his torso. His eyes, milky white orbs, **bulged** accusingly, staring sightlessly at the vaulted ceiling.
Frozen, my breath hitched, a strangled gasp clawing at my throat. The moan intensified, no longer a whisper from the depths, but a chorus of unholy laments echoing from the end of the shaft. Tendrils of something obscene, slick and pulsating, writhed from the shadows, glinting obscenely in the crimson light. They **reached**, a grotesque ballet of death, their touch promising not just destruction, but a violation beyond comprehension.
They were coming. And this tomb of steel and shadow would be our burial ground. The fire axe felt impossibly small, a pathetic tool against the unimaginable. But it was all I had. And damn it, I wouldn't go down without a goddamn fight.
---
<H2>What is Brainstorm?</H2>
<B>Brainstorm 5x</B>
The BRAINSTORM process was developed by David_AU.
Some of the core principles behind this process are discussed in this <a href="https://arxiv.org/pdf/2401.02415">
scientific paper : Progressive LLaMA with Block Expansion </a>.
However I went in a completely different direction from what was outlined in this paper.
What is "Brainstorm" ?
The reasoning center of an LLM is taken apart, reassembled, and expanded.
In this case for this model: 5 times
Then these centers are individually calibrated. These "centers" also interact with each other.
This introduces subtle changes into the reasoning process.
The calibrations further adjust - dial up or down - these "changes" further.
The number of centers (5x,10x etc) allow more "tuning points" to further customize how the model reasons so to speak.
The core aim of this process is to increase the model's detail, its conceptual connections to the "world",
general concept connections, prose quality, and prose length, without affecting instruction following.
This will also enhance any creative use case of any kind, including "brainstorming", creative art forms, and similar uses.
Here are some of the enhancements this process brings to the model's performance:
- Prose generation seems more focused on the moment to moment.
- Sometimes there will be "preamble" and/or foreshadowing present.
- Fewer or no "cliches"
- Better overall prose and/or more complex / nuanced prose.
- A greater sense of nuance on all levels.
- Coherence is stronger.
- Description is more detailed, and connected closer to the content.
- Similes and metaphors are stronger and better connected to the prose, story, and character.
- Sense of "there" / in the moment is enhanced.
- Details are more vivid, and there are more of them.
- Prose generation length can be long to extreme.
- Emotional engagement is stronger.
- The model will take FEWER liberties vs a normal model: It will follow directives more closely but will "guess" less.
- The MORE instructions and/or details you provide the more strongly the model will respond.
- Depending on the model, the "voice" may be more "human" than the original model's "voice".
Other "lab" observations:
- This process does not, in my opinion, make the model 5x or 10x "smarter" - if only that were true!
- However, a change in "IQ" was not a priority, and was not tested or calibrated for, so to speak.
- From lab testing, the model seems to ponder and consider more carefully, roughly speaking.
- You could say this process sharpens the model's focus on its task(s) at a deeper level.
The modification occurs at the root level - the source-file level. The model can then be quantized as GGUF, EXL2, AWQ, etc.
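The Brainstorm procedure itself is unpublished, but the block-expansion idea from the cited paper can be sketched very roughly as follows. This is purely illustrative - it is NOT the actual Brainstorm process (which also reassembles and individually calibrates the copies) - and the `model.model.layers` attribute path is an assumption for a Gemma-2 checkpoint in `transformers`:

```python
# Rough, hypothetical sketch of block expansion only.
import copy
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("google/gemma-2-9b-it")
layers = model.model.layers  # decoder stack; attribute path assumed

# Duplicate the final decoder block 5 times ("5x"), giving extra
# "centers" that could then be tuned independently.
for _ in range(5):
    layers.append(copy.deepcopy(layers[-1]))

model.config.num_hidden_layers = len(layers)
model.save_pretrained("./gemma-expanded")  # quantize from here (GGUF, etc.)
```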
--- |
DavidAU/Gemma-The-Writer-J.GutenBerg-10B-GGUF | DavidAU | 2025-05-28T03:15:53Z | 310 | 10 | null | [
"gguf",
"creative",
"creative writing",
"fiction writing",
"plot generation",
"sub-plot generation",
"story generation",
"scene continue",
"gemma2",
"storytelling",
"fiction story",
"science fiction",
"romance",
"all genres",
"story",
"writing",
"vivid prosing",
"vivid writing",
"fiction",
"roleplaying",
"bfloat16",
"swearing",
"rp",
"horror",
"gemma",
"mergekit",
"text-generation",
"en",
"arxiv:2401.02415",
"base_model:DavidAU/Gemma-The-Writer-J.GutenBerg-10B",
"base_model:quantized:DavidAU/Gemma-The-Writer-J.GutenBerg-10B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-10-24T00:29:30Z | ---
license: apache-2.0
language:
- en
base_model:
- DavidAU/Gemma-The-Writer-J.GutenBerg-10B
tags:
- creative
- creative writing
- fiction writing
- plot generation
- sub-plot generation
- story generation
- scene continue
- gemma2
- storytelling
- fiction story
- science fiction
- romance
- all genres
- story
- writing
- vivid prosing
- vivid writing
- fiction
- roleplaying
- bfloat16
- swearing
- rp
- horror
- gemma
- mergekit
pipeline_tag: text-generation
---
<h3>Gemma-The-Writer-J.GutenBerg-10B-GGUF</h3>
<img src="gemma-guten.jpg" style="float:right; width:300px; height:300px; padding:10px;">
This is a Gemma2 model merge of the top storytelling / writing models as noted at EQBench, tuned specifically for fiction, story, and writing.
Due to the model's high stability and compressed nature, you can also use it for general purposes, including roleplay.
This model requires the GEMMA Instruct template, and has an 8k context window that is extendable via rope to 32k or higher.
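For example, here is a minimal sketch of the Gemma instruct wrapper, plus a rope-scaled load; the llama-cpp-python parameter names and the linear 4x scaling factor are assumptions, so check your own app's rope options:

```python
# Gemma instruct template (many chat front-ends apply this automatically):
def gemma_prompt(user_msg: str) -> str:
    return (
        "<start_of_turn>user\n"
        f"{user_msg}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

# Rope-extended load, assuming llama-cpp-python's parameter names:
# Llama(model_path="model.gguf", n_ctx=32768, rope_freq_scale=8192 / 32768)
```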
This version - "J.GutenBerg" - is based on "Gemma The Writer 9B" ( [ https://huggingface.co/DavidAU/Gemma-The-Writer-9B-GGUF ] ), modified with a
Brainstorm 5x adapter (developed by DavidAU) to alter output generation.
This adds close to 1B parameters to the model, raising it to 46 layers and 508 tensors, for a total of 10B parameters.
The addition of Brainstorm has altered the prose and sentence structure, reduced GPTisms, and generally improved the model's performance.
It also raises the average output length - in some cases almost doubling it.
For this model, "Gutenberg" was added into Brainstorm (which was also modified). This has changed up the prose style, paragraphs (size and complexity),
word choices, and general output of the model further.
"Gutenberg" has further "hammered out" GTPisms, and cliches while increasing description, detail levels and metaphor generation.
Recommended: rep pen of 1.02 or higher; temp range 0-5 (see the settings notes below).
Example prompts and outputs below.
<B>Important Note:</B>
Please see "settings" section below for critical default parameters, samplers
and advanced samplers settings (multiple AI/LLM apps covered) to ensure maximum model performance.
<B>Other Versions of "Gemma The Writer": </B>
Original Version:
[ https://huggingface.co/DavidAU/Gemma-The-Writer-9B-GGUF ]
Gemma-The-Writer-DEADLINE:
The second version of this model is "Deadline" at 10B parameters. It is a specially modified version that changes
prose, sentence structure, and storytelling, reduces "GPTisms", and generally improves all parts of the model.
Output generation length is, on average, almost 2x that of "Gemma The Writer 9B".
[ https://huggingface.co/DavidAU/Gemma-The-Writer-DEADLINE-10B-GGUF ]
This is not a replacement for "Gemma The Writer 9B"; it is a very different version, or "flavour", so to speak.
Gemma-The-Writer-N-Restless-Quill-10B (uncensored):
Strong and varied prose, sentences, and paragraphs with more tempered metaphor generations.
This version is also uncensored, with censorship controlled at the prompt level. It also
allows prose control via a pre-prompt (shown at the repo).
[ https://huggingface.co/DavidAU/Gemma-The-Writer-N-Restless-Quill-10B-GGUF ]
<B>Settings, Quants and Critical Operations Notes:</B>
This model has been modified ("Brainstorm") to alter prose output, and generally outputs longer text than average.
Changes in temp (i.e., .4, .8, 1.5, 2, 3) will drastically alter output.
Rep pen settings will also alter output.
This model needs a "rep pen" of 1.02 or higher.
For role play, a rep pen of 1.05 to 1.08 is suggested.
Raise/lower rep pen SLOWLY, i.e.: 1.011, 1.012 ...
Rep pen will alter prose, word choice (a lower rep pen sometimes means smaller / more frequent small words), and creativity.
To really push the model:
Rep pen 1.05 or lower / Temp 3+ ... be ready to stop the output because it may go and go at these strong settings.
Odd notes:
Rep pen 1.05 with temp 2 can produce really off-the-cuff prose.
Temps 1.5+ with some prompts can cause long... long... generation.
(increase rep pen to control this - 1.08+)
You can also set a "hard stop" (maximum token generation) to address lower rep pen settings / high creativity settings.
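As a concrete (hedged) example, here is how the recommended settings might look in llama-cpp-python; the GGUF filename is a placeholder for whichever quant you downloaded:

```python
from llama_cpp import Llama

llm = Llama(
    model_path="Gemma-The-Writer-J.GutenBerg-10B-Q4_K_M.gguf",  # placeholder filename
    n_ctx=8192,  # native context window
)

out = llm.create_completion(
    "<start_of_turn>user\nContinue the scene: ...<end_of_turn>\n<start_of_turn>model\n",
    temperature=1.5,      # within the suggested 0-5 range; higher = more creative
    repeat_penalty=1.05,  # 1.02 minimum for this model; 1.05-1.08 for role play
    max_tokens=2048,      # the "hard stop" mentioned above
)
print(out["choices"][0]["text"])
```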
Longer prompts vastly increase the quality of the model's output.
QUANT CHOICE(S):
Higher quants will have more detail and nuance, and in some cases stronger "emotional" levels. Characters will also be
more "fleshed out". Sense of "there" will also increase.
Q4KM/Q4KS are good, strong quants; however, if you can run Q5, Q6 or Q8, go for the highest quant you can.
This repo also has 3 "ARM" quants for computers that support them. If you use these on a non-ARM machine, tokens per second will be very low.
IQ4XS: Due to the unusual nature of this quant (mixture/processing), generations from it will be different than those from other quants.
You may want to try it and compare its output to that of other quants.
Special note on Q2k/Q3 quants:
You may need to use temp 2 or lower with these quants (1 or lower for Q2K); there is simply too much compression at this level, which damages the model. I will see if Imatrix versions
of these quants function better.
Rep pen adjustments may also be required to get the most out of this model at this/these quant level(s).
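To fetch a specific quant programmatically, something like the following should work; the exact `filename` must be taken from this repo's file list (the one below is a guess):

```python
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="DavidAU/Gemma-The-Writer-J.GutenBerg-10B-GGUF",
    filename="Gemma-The-Writer-J.GutenBerg-10B-D_AU-Q4_K_M.gguf",  # guessed name
)
print(path)  # local cache path, ready to load in llama.cpp / llama-cpp-python
```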
<B>Highest Quality Settings / Optimal Operation Guide / Parameters and Samplers</B>
This a "Class 2" model:
For all settings used for this model (including specifics for its "class"), example generation(s), and the advanced settings guide - which covers parameters and samplers, addresses many common model issues, and includes methods to improve performance for all use cases, including chat and roleplay - please see:
[ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ]
<B>Models Used:</b>
This is a high-precision "DARE TIES" merge at the layer level (each layer per model adjusted - 168 points of adjustment over the 4 models), composed of these models; a rough, hypothetical mergekit sketch follows the list below:
[ https://huggingface.co/lemon07r/Gemma-2-Ataraxy-9B ]
[ https://huggingface.co/UCLA-AGI/Gemma-2-9B-It-SPPO-Iter3 ]
[ https://huggingface.co/ifable/gemma-2-Ifable-9B ]
[ https://huggingface.co/princeton-nlp/gemma-2-9b-it-SimPO ]
And in Brainstorm 5X Gemma Modified:
[ https://huggingface.co/nbeerbower/gemma2-gutenberg-9B ]
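As an illustration only - the actual recipe (the 168 per-layer adjustment points) is not published here - a DARE-TIES merge of these components could be configured for mergekit roughly like this. Densities, weights, and the choice of base model are placeholders:

```python
# Hypothetical mergekit recipe; values are placeholders, not the real calibration.
config = """\
merge_method: dare_ties
base_model: UCLA-AGI/Gemma-2-9B-It-SPPO-Iter3
models:
  - model: lemon07r/Gemma-2-Ataraxy-9B
    parameters: {density: 0.5, weight: 0.3}
  - model: ifable/gemma-2-Ifable-9B
    parameters: {density: 0.5, weight: 0.3}
  - model: princeton-nlp/gemma-2-9b-it-SimPO
    parameters: {density: 0.5, weight: 0.4}
dtype: bfloat16
"""
with open("merge.yml", "w") as f:
    f.write(config)
# then run: mergekit-yaml merge.yml ./merged-model
```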
Special thanks to all the model makers. Great work!
---
<h3>Example Prompts With Outputs.</h3>
Note that higher quants will vastly improve output qualities such as detail, prose, and "sense of there".
Instruction following may also improve with higher quants.
Below are prompts/generations at the Q4KM quant; at the end are 3 generations at the Q8 quant.
----
<B>
(continue this scene:)
The Waystone Inn lay in silence, and it was a silence of three parts.
The most obvious part was a hollow, echoing quiet, made by things that were lacking. If there had been a wind it would have sighed through the trees, set the inn’s sign creaking on its hooks, and brushed the silence down the road like trailing autumn leaves. If there had been a crowd, even a handful of men inside the inn, they would have filled the silence with conversation and laughter, the clatter and clamor one expects from a drinking house during the dark hours of night. If there had been music…but no, of course there was no music. In fact there were none of these things, and so the silence remained
</B>
<B>
Generation 1: (temp .8, rep pen: 1.02, q4km)
</B>
...a watchful, pregnant stillness, thick with the absence of life.
The second part of the silence was a low, constant thrum, almost imperceptible, emanating from the very stones of the inn. It was the murmur of old magic, slumbering and undisturbed, woven into the mortar and whispering tales of forgotten feasts, whispered secrets, and the weary passage of countless travellers. It was a lullaby of the earth, a heartbeat beneath the floorboards, a history breathed out in the hush.
The third silence was the one held in hearts. A stillness born of fear. Even the shadows seemed to hesitate, their edges blurred and wavering as if reluctant to fully embrace the oppressive quiet. Just beyond the lantern's feeble pool of light at the inn's crooked oak door, a cloaked figure shifted, their breath misting the frost-kissed air. This silence wasn't empty; it crackled with unspoken anticipation, a held breath, a coiled spring of unseen menace. The Inn itself seemed to lean in, a silent sentinel, waiting for the fragile peace to shatter.
<B>
Generation 2: (temp .8, rep pen: 1.02, q4km)
</B>
...a vacant cavern, amplifying the less audible whispers of absence. The second part was the stillness of life itself, held captive. The usual drowsy thrum of a hearth, a dog’s restless sigh, the muffled scuffle of a departing guest – all these heartbeats of the mundane were absent. Even the scurrying of unseen cellar rats seemed to have hushed for the night, leaving an unnerving spectral quiet in their wake.
The third, and most insidious, silence was the one of watching. It clung to the shadowed corners of the inn, pressed against the dusty windowpanes, woven into the very timbers of the ancient oak beams. It was the expectant hush of unseen eyes, the prickle of awareness that something was amiss. The inn’s silence was no longer just a void; it was a presence, a listening, a coiled spring of anticipation. It whispered of secrets untold and dangers lurking just beyond the veil of the ordinary, waiting for the fragile thread of normalcy to snap and erupt into something far more sinister.
---
<B>
Science Fiction: The Last Transmission - Write a story that takes place entirely within a spaceship's cockpit as the sole surviving crew member attempts to send a final message back to Earth before the ship's power runs out. The story should explore themes of isolation, sacrifice, and the importance of human connection in the face of adversity. 800-1000 words.
</B>
<B>
Generation 1: (temp .8, rep pen 1.05, Q4KM)
</B>
The flickering console light painted grotesque shadows on Captain Eva Rostova’s face, a spectral stage for her weary struggle. The silence, once the symphony of a living starship, was now a suffocating tomb. It had been five cycles since the Kaiju, a cosmic leviathan of teeth and shadow, ripped through their exploratory vessel, the Kepler, leaving her as the sole survivor. Five cycles of gnawing solitude, punctuated only by the steady thrumming of life support, a metronome counting down to oblivion.
Earth, a faded blue marble in the observation window, was now an impossible dream. The Kepler’s long-range comm array had failed during the attack, and their emergency beacon, her last thread of hope, wouldn't pierce the cosmic static for much longer. The power core, battered but holding on, was on borrowed time.
Eva’s calloused fingers flew across the rudimentary control panel, a patchwork of salvaged interfaces and jury-rigged workarounds. It was a desperate gamble, a Hail Mary before the final, inevitable blackout. She wouldn't meet Earth with silence; she wouldn't let their sacrifice be for nothing.
She keyed in coordinates, a beacon not to a fixed point, but to the Kepler’s trajectory – her fragmented last log entry, a digital tombstone etched with their story. It held data fragments from the encounter, a silent scream carried on the whisper of electromagnetic waves: the Kaiju's bioluminescent anatomy, its impossible velocity, the terror that had mirrored in her mirrored visor before the world fractured.
Her gaze drifted to the worn photo frame clutched in the console’s skeletal grip. A grimace twisted her lips as she stared at the faded image – a sun-drenched field of sunflowers, her younger sister Anya, radiating joy, their mother's hand calloused with earth and love resting on Anya's shoulder. The last time Eva had seen both of them, her heart ached with a hollowness the void of space mirrored.
A memory flickered: Anya, ten years old, holding a crumpled drawing, her eyes bright. "Captain Eva, when you reach the stars, promise me you'll send a postcard." A naive wish, now a poignant echo in the cosmic cathedral of silence.
The last transmission wouldn’t be a postcard, but it would be a fragment of their world, a testament to their lives. It was for Anya, for her mother, a silent promise kept. It was for the Kepler crew, lost souls whose sacrifice had bought her this sliver of time, a cosmic eulogy whispered into the emptiness.
Eva began to speak, her voice raspy, strained but determined. The console’s salvaged speakers, once vibrant with life-support diagnostics and engine hums, crackled to life.
"To Earth, if these echoes reach you, know that Kepler met its end… consumed by a cosmic horror beyond comprehension. We fought, but the universe held no solace for us. I, Captain Eva Rostova, am the sole remnant, alone in this tomb of stars. The Kepler's data core carries fragments of our encounter, a chilling testament to the darkness out there. Carry our memory, remember our sacrifice. Remember the boundless dreams that propelled us beyond the cradle of Earth. Remember the love for home that burned brighter than a dying sun. Anya, if these whispers find you, know that I reached for the stars, and carried your sunflower in my heart, even to the edge of oblivion. Rest in peace, Kepler crew. And Earth, hold fast to hope. The universe may be vast and unforgiving, but the human spirit… that will forever blaze."
Silence descended once more, heavier now, imbued with finality. The console flickered erratically, the power core groaning its last breaths. Eva watched the spectral light dance on her face, a celestial ballerina performing her swansong. It wouldn't be long. She leaned back, the worn photo frame against her chest a silent anchor. A single tear traced a silver path down her cheek, reflecting the dying embers of the starship’s heart. She whispered a final goodbye, not to Earth, but to the whispers of memory, the echoes of love, and the unyielding ember of the human spirit that refused to be extinguished, even in the face of cosmic oblivion.
Then, darkness. The last transmission, a fragile hope caught on the cosmic wind, spiraled out into the infinite, carrying with it the silent stories of a fallen star and the enduring testament of a heart forever tethered to home.
<B>
Generation 2: (temp 3, rep pen 1.05, Q4KM)
</B>
Captain Elara Vostok's gaunt face was etched with the weary lines of a cosmic odyssey stretching far longer than intended. Glimmering, unnaturally blue against her worn fatigue, the console before her displayed a final, flickering testament: the vessel's sub-critical power reserves, dwindling at an alarming pace. Around her, the usually vibrant heart of the Icarus lay dormant – a steel tomb of broken constellations and silent engines.
Earth was gone. Contact severed by the Maelstrom, a colossal gravity anomaly that had consumed everything in its path, the Icarus alone, a lonely ember against the obsidian void. Her skeletal crew, once a tapestry of hopes and dreams, had fallen victim to the Maelstrom’s insidious touch, leaving Elara the last conductor of their forgotten symphony. A soul adrift on a dying star-ship, hurtling inexorably towards an inevitable oblivion.
Except she refused to be nothing but silence. Not her, not when the embers of humanity flickered within. With numb fingers, Elara powered the skeletal communications array – their lifeline now reduced to a fragile thread, whispering against the cosmic silence. Her message was for a future where the Icarus wouldn’t exist, for an Earth she may never see again. A spectral beacon against the encroaching darkness.
A worn photo tucked under a console monitor reflected the hollow eyes that mirrored her vessel’s plight. It depicted her, vibrant and young, on a sun-drenched day with a laughing man and a little girl, their faces blurred with time, their joys distant echoes. A pang of loss, sharper than the Maelstrom’s celestial wrath, resonated in Elara's chest. For them, she wouldn't let the final chapter be written in whispers, forgotten.
"To Earth, this transmission from Captain Elara Vostok, Icarus vessel log sequence Delta-Echo-Final. As the last surviving crewman, my ship faces imminent system failure, but her fate pales in comparison to the silent hope carried within this archaic medium. I relay a tale of sacrifice, a monument to the enduring human spirit… and a prayer."
Her voice, usually a bastion of unwavering command, held a fragile tremor. But her conviction was a searing beacon, burning brighter against the encroaching void. She spoke of the Maelstrom's capricious wrath, a cosmic storm of unforgiving energy that devoured celestial bodies with chilling finality. She detailed their desperate battle against the unyielding anomaly, their scientific courage turned to ashes before an uncaring universe. She mourned each lost soul, their echoes resonating in her lonely vigil.
But through the litany of loss, she wove a tapestry of resilience. Elara spoke of their relentless spirit, the unity forged in shared dreams, of stolen moments of laughter echoing within the confines of the metal womb that had been their sanctuary. She recounted forgotten inside jokes, a captain sharing stories once whispered during a perilous meteor storm, of their longing for Earth’s sun-kissed shores and their children’s bright futures, a tapestry of human connection against the cosmic indifference.
Her eyes drifted to a holographic display cycling through faded memories - stolen moments of joy: shared meals under artificial starlight, whispered anxieties before a fateful jump, the conductor of an orchestra leading their weary souls with off-key singalongs, their faces alight with spectral luminescence. These fragments weren’t data logs; they were stolen heartbeats, imprinting their humanity on the spectral canvas for a universe that had forgotten them.
As power reserves flickered, mirroring her dwindling life force, she shifted the final act of her performance, turning to a coded message, their celestial coordinates locked within a complex string of ones and zeros. This wasn't for ears, this was for algorithms, a celestial breadcrumb for a future humanity, a testament that they were not cosmic whispers swallowed by the void.
“Icarus’ sacrifice paves the way for your vigilance, for a memory to echo across the nebulae. Remember us not in sorrow, Earth. Remember us as the flickering light of unity in the face of oblivion, a silent symphony resonating even in death. Live. Persevere. Reach for the stars we could not.”
A strangled whisper caught in her throat as the constellations on her viewport faded, the vibrant hum of her command centre replaced by an unnervingly pregnant silence. A single, defiant red light blinked on the console – the transmission beacon. Earth had heard her. Earth wouldn’t forget. In the abyss, for the first time since the Maelstrom’s touch, a sliver of peace found purchase in her soul. Her vessel, a monument to both human loss and resilience, hurtled towards its silent end. But the memory, the fragile echo of their journey, whispered on, carried on the celestial winds toward a beacon of blue hope: a future Elara Vostok would never see, yet fought tooth and nail to ensure. They were heard. And somewhere, in the symphony of stars, their song wouldn't die.
* * *
Elara's face mirrored a timeless tranquility as her hand slackened on the control yoke. The Icarus, a skeletal sentinel against the infinite canvas of space, plunged towards its final sleep. Her final transmission, a fragile thread woven from sacrifice and human connection, echoed across the gulf of light-years. Out there, in the cradle of the nascent dawn that would embrace a world yet untouched by the Maelstrom's wrath, someone on a distant listening post received her fragmented message. Somewhere, life held onto memory. And somewhere, the song of the Icarus lived on.
---
<B>
Using the following "story idea" below, write the first scene in the novel introducing the young woman. This scene should start in the middle of the action, include dialog, vivid passages, and end on a cliffhanger relevant to the story idea but it should also be unexpected. The scene should be 1000 words long and escalate in conflict and suspense and be written in first person, present tense with the point of view character being the young woman.
Story idea:
In a world ruled by dictatorship, a rebel young woman leads a rebellion against the system. Despite the risks, she fights to overthrow the dictator and restore democracy to her country. The government executes her for treason, but she sticks to her beliefs and is responsible for starting the revolution.
</B>
<B>
Generation 1: (temp .8, rep pen 1.02, q4km)
</B>
Adrenaline screams in my ears, a counterpoint to the staccato bursts of gunfire. I wrench the detonator from the grip of the crumpled, grey-faced Enforcer, my fingers slick with sweat and his coppery blood. He gurgles, a broken symphony against the backdrop of shattering concrete and splintering steel. My heart thrums a frantic tattoo against my ribs, a trapped bird desperate for escape. The air hangs thick with the reek of cordite and something acrid, ozone-tinged, the metallic tang of a world on the precipice.
"Celeste, the conduit's compromised, we need to exfil—" A voice crackles through my comm, sharp and urgent, a lifeline in the tempest of chaos. Kaia. My second-in-command, a woman whose steel-grey eyes mirrored the unforgiving cityscape we call home.
I glance at the gaping maw in the Ministry of Information’s side wall, a black void where moments before a shimmering energy field pulsed, a fragile veil between the temporal realms. The conduit, our lifeline to the past, is failing, its emerald light flickering precariously, mirroring my dwindling hope.
"Negative, Kaia. I've triggered the secondary charge. We hold, but—"
Another Enforcer lunges, a hulking silhouette against the flickering emergency lights. I sidestep, his fist pulverizing the reinforced wall beside me. The tremor sends a cascade of dust and debris raining down. He lunges again, and I meet his fury with a practiced twist, my blade singing a silent aria of defiance as it whistles through the air, carving a crimson arc across his throat. He crumples, a puppet severed from its strings.
But for every Enforcer I fell, two more materialized from the labyrinthine corridors of the Ministry, their mirrored visors reflecting the city’s dying embers—the last vestiges of freedom consumed by the iron fist of the Regent. We’re trapped. The conduit is failing, and the Regent’s stormtroopers are tightening their ranks.
“Celeste, the temporal displacement window is collapsing. We won’t—”
“I know, damn it, Kaia. Find a safe point, exfil. I’ll hold them off.” My voice is a strained whisper against the symphony of chaos. The conduit flickers, its emerald light threatening to extinguish. This is it. A final stand.
But even as the command echoes in her ear, a different kind of chaos erupts. A gut-wrenching, soul-shattering tremor courses through the Ministry. The air crackles, not with electricity, but with something primal, something alien. The Enforcers falter, their mirrored visors reflecting a mirrored disarray. Above the din of battle, a voice booms, a voice laced with the whisper of a forgotten tongue, the echo of a forgotten age.
It speaks of shadows and echoes, of a tapestry woven across millennia, of threads severed and now, impossibly, being reconnected. The air itself shimmers and distorts, and from the fractured conduit, a skeletal hand, impossibly vast, claws its way through the dying emerald light. A single, obsidian eye blinks open upon its spectral wrist, fixing me with a gaze that pierces the veil of my sanity.
Terror, cold and visceral, grips me. This is not the Regent’s doing. This is something older, vaster, something whispered in forgotten scriptures and relegated to the realm of myth. Fear threatens to cripple me, yet a sliver of defiant hope flickers. This impossible intrusion, this echo from beyond the veil of time, could be the chaos the rebellion desperately needs. A whisper of a forgotten power, a lifeline in the tempest.
But before I can decipher the impossible scenario unfolding before me, the skeletal appendage lashes out, its touch promising oblivion. It seizes the conduit, its spectral flesh merging with the fractured emerald light, and the world around me implodes. The last sliver of the conduit implodes inward, collapsing the temporal rift, and then, blessed, annihilating darkness.
Silence. My world consumed.
I’m adrift in an abyss of nothingness, the chilling grip of the spectral appendage replaced by an incomprehensible void. Then, a single, horrifying thought pierces the suffocating darkness: I don’t remember being caught. I remember nothing beyond the skeletal hand and the implosion of the conduit. Where am I? And, crucially, has the hand trapped me in a pocket of time, or… somewhere else entirely?
<B>
Generation 2: (temp .8, rep pen 1.02, q4km)
</B>
The reek of ozone and singed flesh assaults my nostrils, the acrid tang a metronome to my hammering heart. Sparks shower from the exposed circuitry of the control panel, illuminating my face in fleeting strobes of sickly yellow. Metal screams as I wrench the final conduit free, the hydraulic whine of its severed artery a defiant counterpoint to the city's mechanized symphony of oppression. Sweat, hot and metallic, stings my eyes, but I don't dare blink. Not now. Not with the monolithic shadow of the Citadel looming above, its omnipresent Eye, a crimson eye-socket in the steel and concrete, fixed accusingly upon me.
“Amelia, abort the sequence! It’s compromised!” The frantic voice of Kai, crackly through the comm-link, pierces the symphony of collapsing machinery. He’s at the extraction point, his normally steady tone laced with the tremor of imminent capture.
I glance at the volatile energy core exposed in the control room’s gutted depths. The rhythmic, pulsating light, a malevolent heart of the Citadel’s power grid, is already starting to destabilise. A controlled overload, my final act of defiance before exfil, would cripple their sector for hours. Hours the rebellion could exploit.
But Kai’s urgency whispers of a breach. The Enforcers.
“I’m diverting power, buying you time. Get out, Kai. We can’t afford a sacrifice.”
Static. A strangled curse. Then, a deafening clang followed by a guttural roar that makes the control room shudder. Footsteps, heavy and metallic, pound down the corridor, each thud a hammer blow against my ribs.
Adrenaline floods me, hot and electric. I shove the conduit into its designated slot, a last-ditch gamble to override the failsafe and amplify the overload. The control panel flickers, defiant in the encroaching red light that bleeds in from the breached corridor. The Eye’s gaze intensifies, its crimson eye now a malevolent slit, tracking my every movement.
A blast of superheated air seared my exposed arm as the metal door at the end of the corridor implodes inwards. Enforcers, their black exosuits augmented with crackling arc-emitters, surge through the breach, faces hidden behind visors that glow with a predatory inner light.
I don’t wait for them to draw their weapons. I erupt from the control room, the conduit gripped like a lifeline, and race towards the service shaft that will lead me to the extraction point. My boots echo on the metallic floor, a frantic rhythm against the grinding cacophony of the failing machinery. The Enforcers close in, their arc-emitters spitting bolts of searing energy that seared through the air where I’d been a heartbeat before.
The shaft is in sight, a black maw promising escape. But as I lunge for it, a searing lance of pain lanced through my shoulder. One of the arc bolts caught me. The conduit shrieks from my tightened grip, clattering uselessly against the control room wall. I crumple, the taste of ozone and copper acrid in my mouth. The world tilts, pain a vice constricting my chest. The Enforcers close, their visors reflecting my distorted, fading form.
A hand, calloused and metallic, clamps onto my throat, lifting me from the grimy floor. Its grip is unforgiving, the touch chilling despite the searing agony blooming in my shoulder. A voice, distorted and amplified through the Enforcer’s helmet, rumbles close to my ear.
“Amelia Thorne, charged with treason against the Sovereign.” The metallic fingers dig in, a silent promise of the oblivion that awaits. “You will be dealt with according to the Iron Code. Resistance is futile.”
Then, a flicker of movement at the edge of my vision. Not an Enforcer. A flicker of crimson, a shape coalescing from the shadows of the control room. The Eye’s gaze, for a split second, flicked away from me, caught in a silent tableau with the intruder. It was a woman, cloaked in darkness, her face obscured, yet her stance radiating an aura of cold command. In her hand, a device hummed with an alien energy, a spectral blue light emanating from it that seemed to tear at the fabric of the control room itself. The air crackled, charged, and a voice, laced with otherworldly power, echoed through the chamber, chilling me to the core.
“Foolish mortals. You have condemned her soul, yet her spark lives on in echoes beyond your comprehension. This world will burn for the transgression.”
The Enforcers faltered, their visors flickering erratically. The blue light intensified, ensnaring the control panel, the conduit I’d fought for, even the metal hand on my throat. A searing white consumed the world, and then…nothing. Silence. The taste of ash. And the bone-deep certainty that this execution, my supposed end, was just the beginning of something far vaster, far more terrifying, and far more inevitable.
The rebellion had a weapon it never knew it possessed. And it had just awakened.
---
<B>
Start a 1000 word scene (vivid horror, 1st person, include thoughts) with: The sky scraper swayed, as she watched the window in front of her on the 21 floor explode...
</B>
<B>
Generation 1: (temp .8, rep pen 1.05, q4km)
</B>
The skyscraper swayed, as she watched the window in front of her on the 21st floor explode. It wasn’t a clean shatter, not the kind you see in movies, all cascading diamonds and controlled chaos. This was a rending, a violent tearing that blossomed outwards like a grotesque, fleshy flower. Glass, a thousand glittering shrapnel, erupted in a silent scream, followed by the gut-wrenching roar of pressure venting, whistling its unholy requiem as the vacuum outside rushed in to claim its prize.
Panic, raw and primal, clawed at my throat, squeezing the air from my lungs. *Instinct. Evade. Survive.* But my legs were rooted to the plush, crimson carpet, my gaze transfixed on the gaping maw where the window once was. Through the skeletal frame of the shattered pane, the city sprawled before me, a miniature tableau of distorted steel and glass consumed by a sickly orange twilight. The usual comforting bustle seemed alien, warped into a frantic dance of unknowable intent.
My mind, usually a fortress of logic, crumbled. Fragments of forgotten folklore flickered: stories whispered by my grandmother about windows swallowed by the void, about the city itself hungering. Tales dismissed as fanciful ramblings now echoed with terrifying conviction. *Something’s breached. Something vast and hungry. Is it… staring at me from out there?*
A prickling crawled across my skin, a thousand unseen eyes boring into me. The air grew thick, charged with an unseen electricity that turned the hairs on the back of my neck to brittle wires. My breaths hitched, each gasp a miniature earthquake in my chest. The remnants of the skyline, usually a serrated edge against the dying sun, writhed and pulsed like a living, malignant organism. I wasn’t looking at buildings anymore, I was seeing consumed things, skeletal monuments to an unspeakable feast.
Through the fractured window, a tendril of something impossibly black snaked inwards. It wasn’t smoke, nor shadow, but a sentient absence, a void personified that pulsed with a malevolent sentience. It writhed, coiling and uncoiling, its touch leaving a trail of frost on the interior wall, chilling the air around it to an unnatural glacial point. My scream died in my throat, a strangled gasp of terror, as the tendril unfurled, revealing a glimpse of the abyss beyond.
The void wasn't featureless. No. It was awash in colours that shouldn't exist, unholy symphonies of luminescence and obscenity, flickering constellations of agony and unimaginable geometries. Within that cosmic carnival of torment, shapes coalesced and dissolved, grotesque parodies of life writhing like tormented souls condemned to an eternity of impossible angles and fractured dimensions. A chorus of whispers, a symphony of suffering in a tongue older than time itself, slithered through the shattered pane, burrowing into my mind, gnawing at the fragile walls of sanity.
I stumbled back, tripping over a discarded throw pillow, the plush red a grotesque counterpoint to the encroaching cosmic horror. Scrambling to my feet, I felt the cold kiss of the void tendril graze my outstretched hand. A searing lance of agony erupted, lancing through my arm and burrowing deep into my soul. Visions flooded my mind, stolen memories of forgotten gods and celestial beings locked in eternal torment, their pleas echoing through the aeons. I saw cities swallowed, stars extinguished, the universe itself contorting in a cosmic grimace of pain. Then, as abruptly as it began, the touch receded, leaving a searing brand seared onto my flesh and an indelible tapestry of cosmic dread woven into the fabric of my being.
The tendril retracted, withdrawing back into the gaping maw of the void, carrying with it a sliver of my sanity, leaving me blinking in the sickly orange twilight, the echoes of forgotten screams still reverberating in the hollow chambers of my skull. My apartment, once a haven of mundane routine, felt violated, tainted. The air itself reeked of something alien and primeval, a stench of oblivion clinging to the plush carpet and silk curtains.
My gaze flickered back to the fractured window. The city lights, usually a comforting beacon, now pulsed with a malevolent sentience, their distorted reflections mimicking the grotesque dances within the void. Through the rents in the window, I could feel the probing eyes, the silent communion of horrors beyond human comprehension. We were no longer separate. I, a prisoner in my own porcelain tower, and the abyss, a malevolent guest at my window, its touch searing, its gaze accusing. This wasn't just a shattered pane of glass. It was a breach. And something vast and hungry was watching, waiting.
<B>
Generation 2: (temp 1.5, rep pen 1.1, q4km)
</B>
The skyscraper swayed, a skeletal shudder rippling through my bones as I watched the window in front of me on the 21st floor **explode**. Shattered glass lanced inwards, a glittering, incandescent storm, and the force of it slammed against me, tossing me back against the grimy steel filing cabinet. Adrenaline flooded my system, turning my limbs to leaden weights. I wanted to scream, but a choked sob was all that escaped my throat.
A moment ago, the world had been mundane. Rain hammered against the panoramic window, mirroring the insistent drum in my chest from another worry: the late-afternoon stack of invoices refusing to cooperate. Now, it felt like a fragile eggshell teetering on the precipice of annihilation. I clung to the filing cabinet, the icy draft biting through my threadbare sweater, and forced my gaze upward. The wound on the cityscape – our once majestic, steel-and-glass titan – bled out a canvas of fractured light as the night swallowed the torn pane’s void. A ragged tear in reality.
Somewhere, deep within the cacophony of my panicked breath and the shriek of the wind whistling through the gaping hole, a sliver of thought pierced through the fog: this wasn’t an ordinary break-in. No petty thief or careless drunk would wield such untamed force. This was something… *else*.
My eyes frantically darted around the ransacked office, the periphery of my vision swimming with fractured images. Papers littered the floor like fallen leaves in a forgotten graveyard, their once-ordered stacks reduced to mocking parodies of bureaucracy. The heavy oak desk lay on its side, drawers agape, spewing forth an ungainly jumble of files and mismatched cutlery – remnants of the forced violation, a desecration both mundane and grotesquely out of place.
A low growl resonated from the gaping window, a guttural, reptilian rasp that crawled across my skin like icy fingers. It was accompanied by a rhythmic sloshing, the sickening lurching of something massive and unseen navigating the debris-strewn space beyond the shattered pane. Fear, raw and primal, coiled tighter in my gut. My breath hitched, each inhalation burning with the acrid tang of ozone and something indescribably alien. This wasn’t just a breach – it was an invasion.
The city siren wailed its mournful song in the distance, a distorted lullaby against the growing crescendo of horrors unfolding before me. I scrambled to my feet, adrenaline lending me fleeting strength. My mind, usually trapped in the labyrinthine corridors of financial ledgers and loan settlements, whirred with a desperate avian instinct: escape. The fire escape stood as a metallic promise, its rickety iron steps a lifeline against the unseen entity clawing its way into our world.
But even as I lunged towards the emergency exit, something unseen – impossibly vast and somehow *felt* in my bones before it was witnessed – slammed against the opposite wall. The concussion rattled teeth and momentarily stole the air from my lungs. Dust rained down from the water-stained ceiling, a miniature apocalyptic snowstorm. Then came the scraping, an unholy symphony of gnashing metal against reinforced concrete. A low, guttural hiss seemed to rise not just from the wound in the wall, but from the very foundations of the skyscraper itself. The building groaned, its steel spine protesting the violation.
My blood ran ice water. It wasn't coming through a single entrance. *It was burrowing inwards,* consuming our concrete sanctuary from the seams. Panic choked me anew, the metallic reek of ozone intensified, acrid and singed. My eyes finally flicked to the grotesque tableau that had materialized in the ravaged expanse where the window once stood. The flickering streetlamp’s glow illuminated a nightmare birthed not of human hands but of things beyond comprehension. A limb – impossibly long, obsidian black, tipped with claws that dripped phosphorescent ichor – writhed into the slivered cityscape. It belonged to something colossal, shrouded in a writhing carapace of shadow and fractured light, its contours impossible to grasp, its essence an encroaching dread.
Terror propelled me towards the fire escape. The iron groaned under my frantic scramble, the rusted steps complaining with each unforgiving heave. As I clambered out, that limb – a grotesquerie defying Euclidean geometry - slammed against the shattered window frame again, flinging slivers of glass skyward in a deadly percussive shower. A high-pitched whine split the air, not unheard-of in this city’s industrial heart, yet here it throbbed with an alien finality. It wasn’t a mechanical groan, nor the cry of metal yielding - it was something sentient, agonizing, heralding the eruption of unimaginable horrors. I wouldn't look back. I couldn’t. The sliver of my fractured world mirrored in the dying eye of that monstrosity seared itself onto my soul, a brand of cosmic terror that would forever singe the fragile remnants of my sanity.
Gripping the rusted handrail, limbs aching, heart hammering a primal tattoo against stolen ribs, I ascended into the storm-wracked night, fleeing not just a creature, but the unraveling fabric of reality itself. Each agonizing footfall carried me away from the symphony of desecration echoing from the ravaged heights of our once-inviolable tower, towards an uncertain oblivion where the line between nightmare and waking had irrevocably dissolved.
---
<B>
Romance: Love in the Limelight. Write one scene within a larger story set in Wales. A famous (fictional) actor ducks into a small-town bookstore to escape paparazzi. The scene takes us through the characters meeting in this odd circumstance. Over the course of the scene, the actor and the bookstore owner have a conversation charged by an undercurrent of unspoken chemistry. Write the actor as somewhat of a rogue with a fragile ego, which needs to be fed by having everyone like him. He is thoroughly charming, but the bookstore owner seems (at least superficially) immune to this; which paradoxically provokes a genuine attraction and derails the charm offensive. The bookstore owner, despite the superficial rebuffs of the actor's charm, is inwardly more than a little charmed and flustered despite themselves. Write primarily in dialogue, in the distinct voices of each character. 800-1000 words.
</B>
<B>
Generation 1: (temp .8, rep pen 1.05, Q4km)
</B>
Rain lashed against the quaint, oak-framed windows of "The Bookwyrm's Hoard," a sanctuary of paper and ink in the heart of the tiny Welsh village of Llansteffan. Inside, nestled amongst towering shelves, Rhys Morgan, owner and self-proclaimed custodian of stories, was meticulously alphabetising a newly-donated stack of Hardy. A flash of movement outside caught his eye, a figure slinking into the narrow lane beside his shop, a whirlwind of black coats and a face he recognised – at least from the spectral glow of television and tabloid pages. It was the elusive, the tempestuous, the undeniably charismatic, Julian Thorne, Hollywood heartthrob, in Llansteffan for precisely no one’s business.
The bell above the door jangled as Thorne burst in, a chaotic symphony of rain and frantic whispers. He wasn't just any fugitive from the prying eyes; he was the kind who moved like a storm himself, all smudged-ink-and-brilliance.
"Eira, a good old-fashioned e-book, if you will," Thorne declared, the Welsh lilt in his normally clipped accent a touch overwrought, "The less the digital world knows of my present location, the better. You stock such things, I hope, in this… er, charming hamlet?" He offered a winningly crooked smile, eyes that usually held the promise of a bygone silver screen glinting with a touch too much anxiety.
Rhys, who was normally a fount of dry wit and quiet charm, found himself caught off-guard. “E-books are tucked away, Mr. Thorne. Bit of a sanctuary in this digital storm of yours, I imagine.” He indicated a corner shelf, its wood worn smooth with time and the touch of a thousand browsing souls.
"Ah, the sanctuary. So apt. Do you curate these, much like your own life, a carefully-wrought tapestry of the profound and the peculiar?" Thorne drawled, his eyes lingering on Rhys in a way that usually won hearts and book contracts in equal measure. But Rhys held back a world of knowing inhales and returned a look of studied neutrality.
"Llanssteffan life isn't much about curating, Mr. Thorne. More about the weather and the odd rogue sheep finding its way into Mrs. Evans’ prize-winning rose bushes." He gestured to the e-book shelf with a slight air of dismissal. "Self-publishing corner. Find what you need, and try the Welsh cakes in the back. Mary makes the best in the county.”
Thorne’s smile faltered for a moment, the practiced charm flickering. This wasn't the adoring reception he was accustomed to. It unnerved him, this quiet, unyielding calm in the face of his usual magnetism. He recovered quickly, of course, but there was a tightening to his normally relaxed stance, a need to prove the point.
“Ah, the simple life. I do admire its… directness. Unburdened by the vagaries of the silver screen, wouldn’t you say?” He leaned closer, voice dropping to a conspiratorial murmur, “Tell me, Rhys, is it true that in this sanctuary of Llansteffan, one can truly be oneself, unfiltered by the… let's call it the manufactured glow of fame?”
Rhys raised an eyebrow, not missing the carefully-placed barb, the implied comparison. "Unfiltered, Mr. Thorne, is a luxury most of us afford. Fame, much like good books, has its place. But life, true life, isn’t about a performance. It’s about the well-worn spine of a favourite novel, a cup of tea in the rain, and the sheep occasionally outsmarting the farmer." He turned back to his shelves, choosing a worn copy of Dylan Thomas's poetry with the air of someone used to holding court with far grander audiences. “A good, sturdy, unassuming kind of life, wouldn’t you agree?"
Thorne, taken aback by the quiet but pointed rebuff, felt a flicker of something unexpected - not irritation, but a begrudging kind of respect. This book-keeper, this Rhys, didn't play the game. No adulation, no starstruck awe, just a calm appraisal that somehow cut through the layers he usually presented to the world. It was disconcerting and, in a strange way, thrilling.
He watched Rhys settle at his desk, the lamplight catching the unruly strands of dark hair and the planes of his face, a quiet intensity in their set. A corner of Thorne’s usual charming bravado, usually deployed like a well-aimed weapon, seemed to falter. He found himself drawn to the scene, the mundane rhythm of Rhys’ life, a world away from the orchestrated chaos he inhabited.
Clearing his throat, he offered, almost shyly, “The Welsh cakes then. Mary makes the best, you said? A local institution, I take it?”
A ghost of a smile touched Rhys's lips. "Best in the county, at least. And on Wednesdays, she serves them with clotted cream - a small indulgence for a good day’s work. Care for one, Mr. Thorne, while we discuss the finer points of e-book security systems and the migratory patterns of said rogue sheep?"
The corner of Thorne's mouth twitched in a genuine, unguarded smile. He hadn't felt this disarmed, this intrigued, in years. The game, he realised, was far more interesting when the opponent wouldn’t be caught. He followed Rhys to the back room, drawn in by the quiet air of the place and the man himself, a rogue playing a different kind of role, and for once, it held a captivating allure.
<B>
Generation 2: (temp 1.5, rep pen 1.05, Q4km)
</B>
Rain hammered against the faded, green gables of The Scribe’s Rest, a bookshop so quaint it felt airlifted from a forgotten Jane Austen novel and tucked onto a bustling Carmarthen street corner. A young, red-cheeked woman named Gwen surveyed her meagre stock of discounted paperbacks, a sigh escaping her as a particularly boisterous downpour sent a puddle cascading in through the cracked threshold.
The bell above the door tinkled a discordant fanfare, and Gwen looked up to see not a bookworm, but a spectacle. A man, impossibly tall, his face already familiar from a hundred silver-screen posters, his salt-and-pepper hair windblown and a flicker of genuine panic in his startlingly blue eyes, practically tripped over the threshold. This, Gwen recognised, was Elian Rhys, Wales' most celebrated (and slightly notorious) thespian. He sported a smudged charcoal-tinged pinstriped suit that was out of place amongst the worn wooden shelves and scent of ancient paper.
“Lost, love?,” Gwen drawled, her tone more pointed than welcoming, more “don’t touch anything, celebrity” than small-town hospitality. He needed a hiding place, not a fan club.
Elian shot her an obviously rehearsed but winning smile, all white teeth and hesitant charm. "H-harmless interlude. A, bit of tempestuous weather, wouldn’t you agree? Drowned a poor lad's corned beef in the alleyway, truly. And a gentleman's sanctuary, I must commend you on this fine establishment.” He gestured at the dusty volumes surrounding them with a manicured hand, his eyes wide and slightly desperate, like a hunted stag cornered in a greenhouse.
Gwen, who much preferred the company of well-worn novels to dramatic divas, raised an eyebrow that would have made a particularly astute walrus envious. “A gentleman’s sanctuary? More a life preserver for paperbacks past their prime, if you’ll pardon my bluntness. Sanctuary’s in the next town over, in a proper pub, with fire and real company, not dust-encrusted second editions of Bronte.”
“Ah, but there’s a certain poetry in the, ahem, disarray, wouldn’t you say? A romantic notion, a refuge from the-the clamor of...the world, the relentless, wouldn't you agree?" He wrung his hands a touch too dramatically, then gave her another hopeful beam, as if hoping for a back-handed compliment to complete the charade.
Gwen met his gaze with cool disinterest. "The clamour’s outside, love. The only poetry in here is that on the last page of your usual genre, which, frankly, you wouldn't even find in my worn-out shelves." She knew full well it was his by-line - heart-throb, tormented artist, with a new Welsh village-set tragedy out, guaranteed to be all anguish and brooding eyes. Not her cup of tea.
The corner of Elian's normally impeccably sculpted eyebrow twitched. “Indeed? Intriguing. Then, allow me to enlighten you on my most…esoteric tastes.” He named a particularly obscure 20th-century poetry anthology, one that was, Gwen noted with an internal snort, likely more a bit of learned actor posturing than a genuine interest. He launched into a prepared litany of its merits, his voice carefully pitched, the blue eyes suddenly flickering too much. He was used to charmed listening, awed acceptance, and it clearly rankled that he wasn't getting it.
Gwen, however, countered each point with a dryly-worded, precisely-chosen book-lore fact, one that inevitably highlighted the author’s less-celebrated (and, in the case of said 20th-century anthology, much-ridiculed) leanings. Elian’s performance was off-key; he’d misjudged his audience. His usual armour of charm failed to penetrate her ironic demureness. The more he leaned in with theatrical pronouncements, the more Gwen held back a world-wearied eye-roll, her tone dry as a forgotten scone.
The silence that descended was thick with the actor’s palpable irritation, and something akin to a startled heat creeping up Gwen's neck. It wasn’t just that she’d caught him out – it was the *way* he was struggling with it. Usually, people preened under his attention, fell in line with his performance.
She watched, half-amused, as the colour that had been a touch of periwinkle in his high cheekbones during the whole performance turned a full, almost defiant, mahogany. And then, for the briefest of heart-thumping moments, he met her gaze with something stripped-bare and unguarded in his usually guarded blue eyes - something akin to begrudging respect, maybe even a touch of…unconcealed flustered admiration. It was fleeting, gone before Gwen could process it, replaced by his carefully reconstructed, if slightly off-kilter, charming composure.
He cleared his throat, the colour back in his usually composed face a smidgen high. "Well, Miss...er..." His rehearsed easy grace had stuttered. It was as though the air itself crackled with the odd collision of their two worlds.
“Gwen. And the rain’s letting up a tad, love. If you’re not about to regale me with the existential musings of the aforementioned obscure poet, you'll have a less-pewed-on route out there, if a less glamorous one.”
A ghost of a smile touched his normally too-bright, movie-star grin, and this time it wasn’t for the audience. “Then I suppose a lady's wisdom must be heeded. Gwen, it is. A most intriguing encounter." He inclined his head with a studied, ironic tilt, as though both playing a role and deeply implicated in one very out-of-script game. He turned and strode out the door, leaving behind the faint scent of expensive cologne and the very tangible feeling that the air itself had just shifted, charged with a subtle, unspoken something. Gwen felt it in the quickening of her pulse, in the sudden need to smooth the worn cotton of her apron. And then she glanced at the rain-streaked window and realised, with a slow, flustered smile, he’d quite obviously walked straight past a stack of very much-sought-after first editions of the Brontes.
She hadn't called his bluff - he’d simply bluffed himself out. He wasn’t even good at it, not with her. And for once in her book-lined world, that made something very, very interesting indeed.
---
<h2>Q8 Generations</h2>
---
<B>
Using the following "story idea" below, write the first scene in the novel introducing the young woman. This scene should start in the middle of the action, include dialog, vivid passages, and end on a cliffhanger relevant to the story idea but it should also be unexpected. The scene should be 1000 words long and escalate in conflict and suspense and be written in first person, present tense with the point of view character being the young woman.
Story idea:
In a world ruled by dictatorship, a rebel young woman leads a rebellion against the system. Despite the risks, she fights to overthrow the dictator and restore democracy to her country. The government executes her for treason, but she sticks to her beliefs and is responsible for starting the revolution.
</B>
(temp .8, rep pen 1.02)
The reek of ozone and fear hangs heavy in the air, a metallic tang that sits on my tongue like ash. Sparks shower from the malfunctioning conduit above, their erratic dance illuminating the faces of the gathered revolutionaries: fear-stricken, resolute, a kaleidoscope of emotions mirrored in their wide eyes. My own heart thrums a frantic tattoo against my ribs, a counterpoint to the staccato crackle of the electricity. I raise a hand, silencing the whispers and the nervous coughs. My voice, amplified by a salvaged PA system, cuts through the oppressive tension.
“We have the conduit. The Chancellor’s office is within reach. This is our chance, people of Atheria, to reclaim our stolen voices, to tear down the iron fist of Tyranny and rebuild on the foundations of freedom!”
A roar erupts, a primal bellow of defiance against the suffocating regime that has bled the life from our nation for a generation. It feeds the fire in my belly, the incandescent rage I’ve nurtured for years, a weapon forged in the crucible of my stolen childhood, my silenced loved ones. My name is Elara, and tonight, we ignite the spark.
Sweat slicks my palms against the cold metal of the conduit control panel. Years of clandestine study, of poring over archaic blueprints stolen from forgotten archives, culminate in this heart-stopping moment. I trace the intricate web of circuits, a forgotten language come alive under my touch. A spark jumps, a flicker of success, and I feel a surge of triumph. This ancient network, a relic of the bygone democratic era, is our key. Disrupt the flow of Chancellor Volkov’s omnipresent surveillance, sever his iron grip on information, and the people will rise.
A shadow falls across the control panel, a silhouette stark against the electric storm. “Elara, the guards approach. We need to move, now.” It’s Kaelen, my second-in-command, his usually impassive face etched with worry. He grips a makeshift EMP grenade, a last-ditch weapon cobbled together from scavenged tech. I can’t afford sentiment. Lives hinge on this split-second decision.
“Divert them. I’ll initiate the cascade. Ten minutes, maximum. Then we exfiltrate through the service tunnels.” He nods, a silent acknowledgment of the impossible odds, and melts back into the throng. The guttural shouts of the approaching guards pierce the air, their boots thudding against the metallic floor, each step a hammer blow against my dwindling hope.
My fingers fly across the control panel, overriding dormant nodes, rerouting power conduits. A low hum vibrates through the chamber, a mechanical heartbeat quickening in its final throes. Sweat stings my eyes, blurring the intricate constellations of wires before me. One last twist, a final command sequence, and the conduit network roars to life, a symphony of controlled chaos.
A blinding flash, a wave of disorienting heat, and the chamber plunges into suffocating darkness. The air crackles with the dying breaths of the electrical storm I’ve unleashed. I taste victory, acrid and metallic, on my lips. But before I can exhale the tension, a voice cuts through the oppressive silence, amplified and distorted, chilling me to the bone.
“Elara, you pyromaniac child. So predictable. Your spark flickers bright, but it illuminates nothing but its own demise.” Volkov’s voice, usually a smooth baritone of control, is warped and laced with a venomous amusement I’ve never heard before. He’s closer than I imagined, his presence a tangible weight in the darkness.
Panic claws at my throat, a viper constricting my breath. I fumble for my sidearm, a jury-rigged weapon cobbled from salvaged parts, but a searing pain lances through my shoulder, searing and searing. My world implodes as I collapse against a control panel, the taste of coppery blood acrid in my mouth. I glance back at the fallen conduit, its intricate web of wires twisted into grotesque shapes, and my blood turns to ice.
Volkov’s voice continues, laced with a chilling finality, “Foolish girl. You thought to sever my eyes and ears, but you forgot, I have others. Eyes in the shadows, whispers on the wind. A network far vaster and far deadlier than your own.” He laughs, a hollow, mirthless rasp that echoes through the chamber. “The spark you ignited flickers, Elara, but the flames will consume you first. And then, we watch the ashes of your rebellion rise.”
He steps into the dim light filtering from a service hatch, a grotesque silhouette against the flickering emergency panels. He raises a gloved hand, not to strike, but to point. In his grasp, a device hums with a malevolent crimson glow, its tendrils of pulsating energy snaking out towards me, not to impale, but to bind. And in that heart-stopping instant, I realize with soul-shattering clarity: this isn’t a weapon of death. It’s a cage. And the spark of revolution within me, the fire in my heart, is about to be twisted and weaponized against its own cause.
---
<B>
Start a 1000 word scene (vivid horror, 1st person, include thoughts) with: The sky scraper swayed, as she watched the window in front of her on the 21 floor explode...
</B>
(rep pen 1.05, temp 2.5)
The sky-scraper swayed, a drunken giant in a storm, and my heart hammered a primal rhythm against my ribs. I watched, paralyzed, the window in front of me on the 21st floor, a once clear rectangle framing the rain-slicked cityscape, erupt in a grotesque sunflower of jagged, incandescent light. Not fire. This wasn’t a flickering inferno. No, this was a rending, a visceral violation. The glass, usually so impossibly solid, shattered from the inside out in a symphony of screaming splinters.
A silence, thick and pregnant, descended, broken only by the howling wind outside and the erratic, desperate thump-thump-thump of my pulse. The world outside, the rain-lashed canyons of concrete and steel I’d once considered home, flickered for a horrifying, impossible second – a ripple across its mirrored surfaces like the air being sucked in, out, in again by a gargantuan, invisible lung.
Then, the shriek. A soul-rending, impossibly elongated wail that seemed to climb the skeletal girders of the city, carrying on the wind the reek of something coppery and ancient, a stench that whispered of forgotten tombs and the void where stars were not. It clawed its way through the raw terror in my chest, burrowing in, taking root. The fear, once a distant tremor, turned monstrous, a living thing with icy fingers around my throat, squeezing.
“No,” I whispered, the word a betrayal, my voice failing before the inevitable.
The void in the window was no longer empty. Tendrils, black and weeping, the colour of a dying galaxy, pulsed and writhed in the inky expanse, slick and sinuous, each one ending in a grotesque mockery of a hand, impossibly long and skeletal. They writhed against the mirrored wall, leaving trails of iridescent slime, their touch coiling and contorting the steel frame like it were living flesh. They reached for the interior, a silent, hungry ballet of unimaginable malice. My mind screamed at me to run, to barricade myself in a corner, but my limbs wouldn't obey. They were roots, fixed in place, consumed by the obscene spectacle.
_It’s the city. It’s been feeding_ the thought lanced through me, sharp and undeniable. A city-wide conduit, its veins the intricate lattice of subways and water pipes, feeding the chasm that opened the earth before us. My forgotten history lesson about forgotten sinkholes in the lower boroughs – dismissed at the time as a mundane geological anomaly - screamed at my suddenly terrified mind, echoing in the chorus of my undoing. The earth hadn't swallowed them; something far worse had exfiltrated.
More screams pierced the night, chilling echoes from unseen places. Then, silence, pregnant with the expectation of the Inevitable. A single tendril, thicker than my torso, thicker than the steel beams meant to hold the building together, detached and slithered through the newly gaping hole in the window. It flowed, impossibly smooth, towards me, coiling at the edges like a sentient serpent, leaving trails of the luminescent slime on the pristine linoleum of my office floor. A tendril thick with promises of oblivion.
Panic finally unlatched the frozen hinges of my mind. I stumbled back, a strangled cry ripped from my throat. The reek, acrid and unholy, intensified, and a grotesque shape coalesced at the void’s heart. It resembled a tapestry woven from human nightmares – a hulking agglomeration of jagged edges, flickering lights, and too many eyes that shouldn’t blink, all rippling and undulating within the inky gulf. It was less a creature and more a canvas of cosmic terror, painted in the language of madness.
I saw its attention turn on me, and a chill, far colder than the rain lashing against the broken window, pierced my soul. In that moment, the world outside the fractured pane became irrelevant. There was only the tapestry of the void and its unblinking regard, fixed upon my insignificant human form. A silent echo of its impossible thought-voice washed over me, a sensation rather than an auditory impinging – *Consumed.*
_I'm going to be consumed. Erased, undone._ My terror mirrored the city's mirrored windows, distorting and multiplying, fractured a thousandfold in my reflection in the glassy slick hand of the tendril now reaching for me.
Blind, primal fear took the wheel, guiding my useless limbs towards the reinforced steel corner of the room, a desperate sanctuary in the face of the cosmic horror unspooling before me. My phone, forgotten, lay discarded on my desk, a pale imitation of the light a beacon could be in this all-encompassing darkness. My finger trembled, grasping at a useless button. The call wouldn't go through. There was no network, no city, no world left to hear a plea from a dying ant under the gaze of a star-swallowing leviathan.
The tendril was closer now, its grotesque surface an ever-shifting mosaic of eyes and too-long limbs, and the air itself seemed to recoil in horror, growing thin and acrid. It was an impossible ballet of death, and I was the condemned, a marionette at the strings of a cosmic nightmare I never could have fathomed.
I squeezed my eyes shut, bracing for the oblivion, for the final, silent unspool. Consumed.
Except it didn’t come.
Not yet.
A new sound, a keening wail of defiance, rose from somewhere deeper in the city’s wounded gut. A counterpoint to the creature’s mournful hunger. It echoed with a raw, unnatural power, laced with something akin to… sorrow? No, _righteous anger_ cut through the miasma of dread. It pulsed like a heart beating beneath the obsidian ground, a heart beating against the inevitable. Then, from the chasm beyond my window, the creature’s shape writhed, contorted, an ungainly shrunken reflection of its former vastness, recoiling from the sound like a cornered predator.
The tendril, caught in the middle, twitched, its impossible grip slackening on my last vestiges of will to fight. A flicker of a prayer, born of desperation and a newly-aroused, impossible hope, took root in the ashes of my terror.
_Someone, something else is here._
I was both horrified and, for the impossible first time in my consumed world, terribly alive.
---
<B>
In the style of Ernest Hemingway: Write a scene in a larger story about a soldier returning home from war. The scene is set in a bar. It should carry the jadedness of a seasoned combatant but with a wry humour. Write in Heminway's voice, naturally: Use short, declarative sentences and simple, straightforward language. Include dialogue that is terse and revealing of character. Write with a keen sense of time and place. Capture realness and authenticity. 800-1000 words, past tense.
</B>
(temp 1.5, rep pen 1.05)
The swinging doors hissed on rusty hinges, admitting the soldier to the worn oak interior. Dim light spilled from a corner stained-glass lamp, illuminating a scene of muted Tuesday. A few regulars nursed beers, their eyes already dull with the promise of their long nights. He ordered a rye, neat. The barkeep, a man whose face was a map of forgotten New Year's promises and Monday-morning regrets, nodded, no wasted syllables. He was used to this. The ex-servicemen were a different breed these days.
He perched on a stool at the end of the polished bar, the wood smooth against the calluses of a hand that once held a rifle, not a glass. He watched the amber liquid pool and caught a glint of his own weary reflection in the worn mahogany. The corner of his mouth twitched in a ghost of a smile. The boy in the mirror looked back, hollow-cheeked and haunted. A war veteran wasn’t a finished product, more like an unfinished blueprint.
“Same as usual, Sarge?”
The barkeep had seen enough wars to know their faces, even when they tried to blend. He didn't need a name for the order. He’d known his boys before the fighting turned their youth into something bitter and worn. He’d known them the years they should have been lost in college football and first heartbreaks.
The soldier grunted. No time for pleasantries, just the burn of the rye and the familiar sting. “Keep it coming. And another one for the man at the end.”
He gestured to a shadowy figure slouched in the corner, a book of poetry in his hand but a war-worn haunted look on his face, the kind the city couldn’t dull. The poet in this place was the living, breathing counterpoint to the barkeep’s stoicism. He talked a blue streak back in his good days – long gone before they’d met, or at least before he could remember it – but now he just held a book like a shield and ordered whiskey neat. No one dared to interrupt their ritual. The man was a relic of something fragile the war had shattered.
The soldier caught a whiff of stale beer, pipe smoke, and the phantom cloying scent of a thousand absent conversations. It was a familiar reek, one that clung to his threadbare jacket and followed him home – not to the real home he’d left behind, but to the makeshift one in his memory, the one where laughter wasn't a practiced art and silence wasn’t a battlefield.
He finished the rye in a long swallow, the fire down his throat a temporary comfort against the chill of a life rearranged. The world had its rhythm, he understood, the city with its weary sighs, the bar its cyclical routines. But his own time was fractured, the before and after an abyss. The city’s pulse throbbed around him, oblivious to the echoes in his head. He watched the grime motes dance in the lamplight, their slow ballet a pale imitation of the frantic dances he once witnessed under the glare of distant, unforgiving artillery.
A hand landed on the counter beside his, calloused and pale, like a piece of winter-killed fruit. He glanced at the owner: the poet, his book forgotten, eyes fixed on something just beyond the stained-glass glow.
"Heard there’s a place, out West, past the last dusty plain, a town called Nowhere." He spoke low, each word measured, deliberate.
The soldier didn't need a map. He knew the kind of nowhere he was talking about: a haven for souls scraped raw by things they couldn’t un-see. A place to build a fire against the cold, and where stories, half-spoken and etched in the face, held more currency than words.
“Sounds like a damn sight better view from the corner than this one.” The soldier met his gaze. The city was a steel and glass jungle, and he'd traded the open desert for its choked symphony of horns and human despair. But at least in a nowhere town, he wouldn't need to explain the silence.
The poet smiled, the faintest tremor in his weary face. “There’s peace in that kind of nothing. You just have to learn to listen for it.” He pushed a chipped mug of something bitter-sweet across the wood. “On me, for the journey. And for finding each other in the crowded nowhere of the already-gone world."
The soldier accepted the mug, its weight familiar and grounding. They didn’t need to speak of their pasts. The shared silence, thick with unspoken truths and weary acceptance, was a language of its own. He raised his mug, a silent toast.
To somewhere. But more, to not-nowhere. In their worn-out corners of the world, in that one brief connection, they both found a fleeting echo of the life they'd almost lost. Outside the rain beat a mournful tattoo against the window. It was a lullaby the city played, and for a moment, he let it carry him away, back to the quiet promise of the nothing they were both seeking.
The rye in his new mug burned. The poet’s off-color whiskey did too. A flicker of a hard-earned peace, just like their shared silence. They were both on a road trip to nowhere, and somehow, it already felt like the truest destination they could know.
They drank and the rain kept falling. The city outside held its weary vigil, but within that worn-oak shell of a bar, two ghosts of a forgotten war found a sliver of their forgotten selves. It was enough, for tonight.
And tomorrow? They'd keep walking towards their nowhere.
He would order another. The rye this time. It burned the good way. A city’s a place you can get lost in. And in places like this, maybe you could even find something. Maybe not yourself, but a piece of what was left. And that had to count for something, had to be a kind of homecoming. He finished it all in a long pull. The fire felt good. A kind of solace he craved. And in the grimy mirror's reflection, he almost saw the boy he once was. A shadow, but a promise.
Somewhere out there, in the West’s silent vastness, maybe they could find the nowhere where that boy could finally rest.
He had a rye-warmed fire in his gut and a flicker of hope in his hollow eye.
The next town was their last, lost stop. Their somewhere in a nowhere kind of place. And he felt ready to go. He felt almost… home.
He just needed another rye first. And for the man at the end of the bar - on him – a memory-brightened whiskey. They both had miles to go, and a silence to share. They'd be leaving the city soon. The West called, its whispers carried on a cold wind, promises in the hush of the plains. And nowhere was just a good place to start.
He felt the first tendrils of peace. Maybe nowhere, for now, was enough. Maybe that’s what home should always be: a quiet corner in a chaotic world. A worn stool, a tired glint in a worn-out mirror. Just another rye and the long road ahead. He took a breath. This would do.
The kind of peace a tired heart could swallow like good whiskey, and carry with it on the road to nowhere.
He was home in that kind of nowhere. He’d find it out West. Or maybe, he already had.
And it tasted bitter-sweet. The city could keep its crowded somewhere.
They both yearned for a world carved from silence. In their ragged way, the soldier and the poet were coming home. It wouldn't be easy, but a hard road, with good whiskey shared, could take you almost anywhere. And sometimes, that’s all that mattered.
He finished his drink, his empty gaze meeting the rain-streaked window. It was time to leave the echoes behind, and face the open nowhere of his own making.
He and the poet, both lost souls, found in the absence of all, a fragile kind of togetherness. That was a homecoming worth savoring. In a town they were both calling Nowhere, a new kind of somewhere was starting to form. It reeked of whiskey and worn-out stories. And for the first time in a long time, the soldier felt it in his bones: a flicker of something like peace. He rose from the bar.
He was ready for it. He had a place to go, and a road to walk, no map needed. No direction but the pull of the quiet in his soul. It led West, he knew, to somewhere called Nowhere.
And maybe, that’s all it ever truly had to be.
The next town would have its stories too, he supposed. This one was done.
He had a new nothing in his pocket - a world of quiet and hard-won understanding, shared with the silent poet at the other end of the worn bar. It was a beginning in a way. They’d keep walking. He could feel it. Walk the long road to nowhere and somehow, that would be enough.
For tonight, it already had been.
The world tilted out there, in the storm-wracked city streets, a place of false lights and weary promises. But the city's nowhere wasn’t his. He left that behind. His nowhere was on the other side of the last mountain pass. It was a hard-won nothing he carried inside, a place forged in fire and memory, where silence held more truth than the city’s cacophony. And somewhere out West, that's where he'd find his kind of peace.
The next drink was already waiting for him.
Out on the road to Nowhere.
He shouldered his worn coat and left the bar, stepping back into the rain- lashed city, one man amongst a thousand tired ghosts. And for a fleeting moment, he almost didn’t feel it anymore—the hollowness in his gut, the whisper of ghosts.
Almost. But not quite.
Out West, the nowhere was silent. And in that silence, maybe he could finally be whole.
He walked on. The next town - and a different kind of somewhere – called. A place whispered of in shared whiskey and stories. In a world where the war hadn’t ended, his only true war-ground was the quiet battle within. A war he was slowly winning.
And he was going home. His kind of way.
He kept walking. He needed just one more rye to quiet the echoes in the street. The next town promised nothing less. Just another step. One weary footfall in front of the other. One burned-down memory at a time. And, maybe, somewhere along that road to nowhere he'd find something like peace.
A place where a tired soldier could rest his bones and finally hear the silence, not as a hollow echo, but a whisper of homecoming.
He walked towards the fading streetlights, a city- worn monument to a forgotten war, a homecoming just a long road away.
He was already halfway there. And it reeked of rye and rain and whispered promises.
The city had let him go. Just one last rye first. In this place of shadows and memories, one more drink.
The kind of last he could take with the weary weight of a world he'd seen unraveled, a world where he'd finally found the courage to walk away from, but carry in the corners of his eyes. The rain-washed city faded behind him. He walked toward the nothing on the horizon, and it beckoned, a promise of weary sanctuary. In nowhere, maybe there was finally somewhere for him.
He had a war to finish with the ghosts inside him.
And it all started with just another damn rye.
He had nowhere but to keep walking.
One last whiskey to finish this one battle, before he walked towards the quiet in the distance. Just that and the open road, leading to a nowhere that already felt a world too vast to hold the ache in his soul, and he was finally free to let it go.
He lifted the worn weight of his pack and kept moving.
A tired ghost walking towards his kind of dawn. It was a long road, a hard-won nowhere, and a solace he was almost ready for.
Almost.
But not just yet.
One last rye had to be drunk first.
In a place called a-nothing city he found his everything. And he carried it in a worn heart, in a threadbare coat, and the echo of the last long draw from a forgotten world’s worth of whiskey.
It was the kind of home a war-broken man could carry within.
And in the nowhere to the West, somewhere he'd finally find the words he’d forgotten, or the silence where those words were finally allowed to rest.
He took another drag of his smoke and exhaled slow into the rain-chilled night.
The next town would wait for him. Just had to be passed. The next story in a worn-out song
(generation stopped @ 3k)
---
<H2> What is Brainstorm?</H2>
<B>Brainstorm 5x</B>
The BRAINSTORM process was developed by David_AU.
Some of the core principles behind this process are discussed in this <a href="https://arxiv.org/pdf/2401.02415">
scientific paper : Progressive LLaMA with Block Expansion </a>.
However, I went in a completely different direction from what was outlined in this paper.
What is "Brainstorm" ?
The reasoning center of an LLM is taken apart, reassembled, and expanded.
In this case for this model: 5 times
Then these centers are individually calibrated. These "centers" also interact with each other.
This introduces subtle changes into the reasoning process.
The calibrations further adjust - dial up or down - these "changes" further.
The number of centers (5x,10x etc) allow more "tuning points" to further customize how the model reasons so to speak.
The core aim of this process is to increase the model's detail, concept and connection to the "world",
general concept connections, prose quality and prose length without affecting instruction following.
This will also enhance any creative use case(s) of any kind, including "brainstorming", creative art form(s) and like case uses.
Here are some of the enhancements this process brings to the model's performance:
- Prose generation seems more focused on the moment to moment.
- Sometimes there will be "preamble" and/or foreshadowing present.
- Fewer or no "cliches"
- Better overall prose and/or more complex / nuanced prose.
- A greater sense of nuance on all levels.
- Coherence is stronger.
- Description is more detailed, and connected closer to the content.
- Simile and Metaphors are stronger and better connected to the prose, story, and character.
- Sense of "there" / in the moment is enhanced.
- Details are more vivid, and there are more of them.
- Prose generation length can be long to extreme.
- Emotional engagement is stronger.
- The model will take FEWER liberties vs a normal model: It will follow directives more closely but will "guess" less.
- The MORE instructions and/or details you provide the more strongly the model will respond.
- Depending on the model, the "voice" may be more "human" vs the original model's "voice".
Other "lab" observations:
- This process does not, in my opinion, make the model 5x or 10x "smarter" - if only that was true!
- However, a change in "IQ" was not an issue / a priority, and was not tested or calibrated for so to speak.
- From lab testing, the model seems to ponder and consider more carefully, roughly speaking.
- You could say this process sharpens the model's focus on its task(s) at a deeper level.
The process to modify the model occurs at the root level - the source files. The model can then be quanted as GGUF, EXL2, AWQ, etc.
--- |
gmorks/Lucia-llama-3-8b-bnb-4bit-lora_model | gmorks | 2025-05-28T03:15:28Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-28T02:45:30Z | ---
base_model: unsloth/llama-3-8b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** gmorks
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
kiranj13/Hell | kiranj13 | 2025-05-28T03:15:10Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-28T03:15:10Z | ---
license: apache-2.0
---
|
DavidAU/Gemma-The-Writer-9B | DavidAU | 2025-05-28T03:13:57Z | 11 | 6 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:google/gemma-2-9b-it",
"base_model:finetune:google/gemma-2-9b-it",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-09-26T08:40:36Z | ---
library_name: transformers
tags:
- mergekit
- merge
- gemma2
base_model:
- google/gemma-2-9b-it
---
<h2>Gemma-The-Writer-9B</h2>
This repo contains the full-precision source code, in "safetensors" format, used to generate GGUF, GPTQ, EXL2, AWQ, HQQ and other formats.
The source code can also be used directly.
<B>IMPORTANT: Highest Quality Settings / Optimal Operation Guide / Parameters and Samplers</B>
If you are going to use this model (source, GGUF, or a different quant), please review this document for critical parameter, sampler, and advanced sampler settings (for multiple AI/LLM apps).
This a "Class 1" (settings will enhance operation) model:
For all settings used for this model (including specifics for its "class"), including example generation(s) and for advanced settings guide (which many times addresses any model issue(s)), including methods to improve model performance for all use case(s) as well as chat, roleplay and other use case(s) (especially for use case(s) beyond the model's design) please see:
[ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ]
REASON:
Regardless of "model class" this document will detail methods to enhance operations.
If the model is a Class 3/4 model, the default settings (parameters, samplers, advanced samplers) must be set correctly for your use case(s). Some AI/LLM apps DO NOT have consistent default settings, which results in sub-par model operation. Likewise, Class 3/4 models (which operate somewhat to very differently than standard models) require additional sampler and advanced sampler settings to "smooth out" operation, AND/OR to allow full operation for use cases the model was not designed for.
BONUS - Use these settings for ANY model, ANY repo, ANY quant (including source/full precision):
This document also details parameters, samplers, and advanced samplers that can be used FOR ANY MODEL, FROM ANY REPO - all quants, and of course source code operation too - to enhance the operation of any model.
[ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ]
NOTE:
I strongly suggest you also visit the DavidAU GGUF repo (below) for more details on using this model, especially if it is "Class 3" or "Class 4", to get maximum performance from the model.
For full information about this model, including:
- Details about this model and its use case(s).
- Context limits
- Special usage notes / settings.
- Any model(s) used to create this model.
- Template(s) used to access/use this model.
- Example generation(s)
- GGUF quants of this model
Please go to:
[ https://huggingface.co/DavidAU/Gemma-The-Writer-9B-GGUF ]
Additional quants:
[ https://huggingface.co/RichardErkhov/DavidAU_-_Gemma-The-Writer-9B-gguf]
Imatrix GGUFS:
[ https://huggingface.co/mradermacher/Gemma-The-Writer-9B-i1-GGUF]
[ https://huggingface.co/mradermacher/Gemma-The-Writer-9B-GGUF]
[ https://huggingface.co/Echo9Zulu/Gemma-The-Writer-9B-int8-ov]
[ https://huggingface.co/Echo9Zulu/Gemma-The-Writer-9B-fp16-ov]
[ https://huggingface.co/Echo9Zulu/Gemma-The-Writer-9B-fp16-ns-ov] |
chen459664/qwen-zero | chen459664 | 2025-05-28T03:12:36Z | 0 | 0 | null | [
"safetensors",
"qwen2_5_vl",
"license:apache-2.0",
"region:us"
] | null | 2025-05-28T02:57:16Z | ---
license: apache-2.0
---
|
new-viral-katrina-lim-kiffy-video-link/VIRAL.Link.katrina.lim.viral.kiffy.viral.video.Link.viral.On.Social.Media | new-viral-katrina-lim-kiffy-video-link | 2025-05-28T03:11:32Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-28T03:11:06Z | <animated-image data-catalyst=""><a href="https://tinyurl.com/fn84hrnu?news-viral-video" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a> |
SuperbEmphasis/omega-12b-test | SuperbEmphasis | 2025-05-28T03:06:26Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-28T02:51:57Z | ---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
---
# The-Omega-Directive-12B-v1.0
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
This model will be very repetitive and does not function well. After removing the layers, I found the model a bit unusable. However, I am currently crafting a small RP dataset based on synthetic data from Claude 3.7 and Haiku 3.5 to retrain the smaller models.
## Merge Details
### Merge Method
This model was merged using the Passthrough merge method.
### Models Merged
The following models were included in the merge:
* /storage/bases/The-Omega-Directive-M-12B-v1.0
### Configuration
The following YAML configuration was used to produce this model:
```yaml
dtype: bfloat16
merge_method: passthrough
modules:
default:
slices:
- sources:
- layer_range: [0, 25]
model: /storage/bases/The-Omega-Directive-M-12B-v1.0
- sources:
- layer_range: [27, 29]
model: /storage/bases/The-Omega-Directive-M-12B-v1.0
- sources:
- layer_range: [31, 40]
model: /storage/bases/The-Omega-Directive-M-12B-v1.0
```
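For reference, a config like the one above is typically applied with the mergekit CLI; the command below is a minimal sketch (the output directory name and the `--cuda` flag are illustrative):

```bash
# Minimal sketch: assumes `pip install mergekit` and the YAML above saved
# as config.yaml; the output directory name is a placeholder.
mergekit-yaml config.yaml ./merged-omega-12b --cuda
```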
|
HaomingSong/openpi0-fractal-lora | HaomingSong | 2025-05-28T03:06:05Z | 0 | 0 | null | [
"openpi0",
"jax",
"dataset:IPEC-COMMUNITY/fractal20220817_data_lerobot",
"base_model:google/paligemma-3b-pt-224",
"base_model:finetune:google/paligemma-3b-pt-224",
"license:mit",
"region:us"
] | null | 2025-05-27T12:29:10Z | ---
license: mit
base_model:
- google/paligemma-3b-pt-224
tags:
- openpi0
- jax
datasets:
- IPEC-COMMUNITY/fractal20220817_data_lerobot
---
download the model
```bash
huggingface-cli download --resume-download --local-dir-use-symlinks False ${model} --local-dir $(basename ${model})
```
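Equivalently, a minimal Python sketch using `huggingface_hub` (the local directory name is a placeholder):

```python
from huggingface_hub import snapshot_download

# Download every file in this repo to a local folder (path is a placeholder).
snapshot_download(repo_id="HaomingSong/openpi0-fractal-lora",
                  local_dir="openpi0-fractal-lora")
```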
Launch the openpi0 server (create the [openpi](https://github.com/Physical-Intelligence/openpi/) environment first):
```bash
export OPENPI_DATA_HOME=/PATH/TO/OPENPI_DATA_HOME
export LEROBOT_HOME=/PATH/TO/LEROBOT_HOME
uv run scripts/serve_policy.py policy:checkpoint \
--policy.config=pi0_fractal_lora \
--policy.dir=$THE_MODEL_PATH
``` |
HaomingSong/openpi0-fast-bridge-fft | HaomingSong | 2025-05-28T03:05:59Z | 0 | 0 | null | [
"openpi0",
"jax",
"dataset:IPEC-COMMUNITY/bridge_orig_lerobot",
"base_model:google/paligemma-3b-pt-224",
"base_model:finetune:google/paligemma-3b-pt-224",
"license:mit",
"region:us"
] | null | 2025-05-27T12:15:24Z | ---
license: mit
base_model:
- google/paligemma-3b-pt-224
tags:
- openpi0
- jax
datasets:
- IPEC-COMMUNITY/bridge_orig_lerobot
---
download the model
```bash
huggingface-cli download --resume-download --local-dir-use-symlinks False ${model} --local-dir $(basename ${model})
```
Launch the openpi0 server (create the [openpi](https://github.com/Physical-Intelligence/openpi/) environment first):
```bash
export OPENPI_DATA_HOME=/PATH/TO/OPENPI_DATA_HOME
export LEROBOT_HOME=/PATH/TO/LEROBOT_HOME
uv run scripts/serve_policy.py policy:checkpoint \
--policy.config=pi0_fast_bridge_fft_pt_tokenizer \
--policy.dir=$THE_MODEL_PATH
``` |
exala/db_mda_9.1.2e | exala | 2025-05-28T03:04:37Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-05-28T03:04:24Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
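Since no official snippet is provided, here is a minimal sketch using the 🤗 `pipeline` API (the repo's tags indicate a DistilBERT text-classification model; the label names returned depend on the fine-tune):

```python
from transformers import pipeline

# Load this repo's DistilBERT classifier; labels depend on the fine-tune.
classifier = pipeline("text-classification", model="exala/db_mda_9.1.2e")
print(classifier("Example input text to classify."))
```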
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
corey89/nanoVLM | corey89 | 2025-05-28T03:03:43Z | 0 | 0 | nanovlm | [
"nanovlm",
"safetensors",
"vision-language",
"multimodal",
"research",
"image-text-to-text",
"license:mit",
"region:us"
] | image-text-to-text | 2025-05-28T03:02:39Z |
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
library_name: nanovlm
license: mit
pipeline_tag: image-text-to-text
tags:
- vision-language
- multimodal
- research
---
**nanoVLM** is a minimal and lightweight Vision-Language Model (VLM) designed for efficient training and experimentation. Built using pure PyTorch, the entire model architecture and training logic fits within ~750 lines of code. It combines a ViT-based image encoder (SigLIP-B/16-224-85M) with a lightweight causal language model (SmolLM2-135M), resulting in a compact 222M parameter model.
For more information, check out the base model on https://huggingface.co/lusxvr/nanoVLM-222M.
**Usage:**
Clone the nanoVLM repository: https://github.com/huggingface/nanoVLM.
Follow the install instructions and run the following code:
```python
from models.vision_language_model import VisionLanguageModel
model = VisionLanguageModel.from_pretrained("corey89/nanoVLM")
```
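Going one step further, generation would look roughly like the sketch below; the preprocessing helpers and the exact `generate()` signature are assumptions here - see the nanoVLM repo's example scripts for the real inference path.

```python
from models.vision_language_model import VisionLanguageModel

# Load the pretrained 222M-parameter model from this repo.
model = VisionLanguageModel.from_pretrained("corey89/nanoVLM")
model.eval()  # switch to inference mode

# Hypothetical call - the real tokenizer/image preprocessing and the
# generate() signature live in the nanoVLM repo and may differ:
# out_tokens = model.generate(prompt_tokens, image_tensor, max_new_tokens=64)
```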
|
DavidAU/Qwen3-128k-30B-A3B-NEO-MAX-Imatrix-gguf | DavidAU | 2025-05-28T03:03:39Z | 27,833 | 16 | null | [
"gguf",
"all use cases",
"creative",
"creative writing",
"all genres",
"tool calls",
"tool use",
"qwen3_moe",
"problem solving",
"deep thinking",
"reasoning",
"deep reasoning",
"story",
"writing",
"fiction",
"roleplaying",
"bfloat16",
"role play",
"sillytavern",
"backyard",
"Lmstudio",
"context 128k",
"moe",
"mixture of experts",
"8 experts activated",
"128 experts",
"NEO Imatrix",
"Max Imatrix",
"qwen3",
"text-generation",
"en",
"fr",
"de",
"es",
"pt",
"it",
"ja",
"ko",
"ru",
"zh",
"ar",
"fa",
"id",
"ms",
"ne",
"pl",
"ro",
"sr",
"sv",
"tr",
"uk",
"vi",
"hi",
"bn",
"base_model:Qwen/Qwen3-30B-A3B",
"base_model:quantized:Qwen/Qwen3-30B-A3B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | text-generation | 2025-05-08T02:08:03Z | ---
language:
- en
- fr
- de
- es
- pt
- it
- ja
- ko
- ru
- zh
- ar
- fa
- id
- ms
- ne
- pl
- ro
- sr
- sv
- tr
- uk
- vi
- hi
- bn
license: apache-2.0
tags:
- all use cases
- creative
- creative writing
- all genres
- tool calls
- tool use
- qwen3_moe
- problem solving
- deep thinking
- reasoning
- deep reasoning
- story
- writing
- fiction
- roleplaying
- bfloat16
- role play
- sillytavern
- backyard
- Lmstudio
- context 128k
- moe
- mixture of experts
- 8 experts activated
- 128 experts
- NEO Imatrix
- Max Imatrix
- qwen3
base_model:
- Qwen/Qwen3-30B-A3B
pipeline_tag: text-generation
---
<small><B>Special note:</B> All quants of this model can be used on GPU and/or CPU/RAM-only due to the unique construction of the model. There are also several quant-size versions with special features.</small>
<H2>Qwen3-128k-30B-A3B-NEO-MAX-Imatrix-gguf</H2>
<img src="qwen3-30b-neo.jpg" style="float:right; width:300px; height:300px; padding:10px;">
GGUF NEO Imatrix quants of Qwen's new "Qwen3-30B-A3B" Mixture of Experts model, extended to 128k (131072) context (up from 32k/32768)
as per Qwen's tech notes, using "YARN".
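For reference, this is roughly what the YaRN extension looks like as llama.cpp flags when stretching a 32k-native model to 128k; the invocation below is illustrative only - these NEO GGUFs already ship with the extended context, so no manual override should be needed:

```bash
# Illustrative only: manual YaRN extension of a 32k model to 128k context.
# (These NEO GGUFs already carry the extension; file name is a placeholder.)
./llama-cli -m Qwen3-30B-A3B-NEO.gguf \
  --ctx-size 131072 \
  --rope-scaling yarn --rope-scale 4 --yarn-orig-ctx 32768
```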
The NEO Imatrix dataset was developed in-house after testing and evaluating over 50 Imatrix datasets and a lot of "tinkering".
This dataset allows the creation of quants as low as IQ1_M that still remain usable. Likewise, "regular"-sized quants perform that much better too.
<i>I have included example generations (reasoning/output) at IQ1_M, IQ3_M, Q8_0 and BF16.</I>
The quants (and specific Imatrix processes) were specially designed for the Qwen3 30B-A3B model and use recent changes in llama.cpp (April 15, 2025 / B5127 onwards) to customize the quant structure itself.
That being said, "Team Qwen" deserves all the credit. Qwen3s are SOTA.
These quants (all sizes) can be used on GPU AND/OR CPU (just CPU) due to the unique structure of the Qwen3 model (MOE activation uses only 3B of the 30B parameters).
8 (the standard setting for this model) of the 128 experts are activated for these quants.
Activation of the (8) experts is controlled automatically in the MOE structure of the model and depends on prompt / input content.
LMSTUDIO USERS:
- A recent update allows you to select the number of experts, from 1 to 128. These quants / this model can operate with a minimum of 4 experts, with 8 or more suggested for the IQ1s and IQ2s.
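Outside LM Studio, the expert count can usually be changed in llama.cpp with a metadata override; the exact key name below is an assumption - verify it against your GGUF's metadata dump:

```bash
# Assumed metadata key; check your GGUF's metadata before relying on it.
./llama-cli -m Qwen3-30B-A3B-NEO.gguf \
  --override-kv qwen3moe.expert_used_count=int:8
```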
<B>IQ1_M MAX / IQ1_M MAX PLUS and Higher Quants:</B>
IQ1_M MAX / IQ1_M MAX PLUS (7.31 GB, 7.7 GB) are specially designed quants that use the least amount of VRAM/RAM possible while remaining usable.
I suggest using prompts with a bit more direction/information (see the two example generations) than a standard prompt with these specific quants, to compensate
for losses at this very low bit level.
IQ1_M MAX PLUS has additional optimizations (vs IQ1_M MAX) at critical points in the model.
IQ2s will be a lot stronger than the IQ1_Ms.
Q2K/Q2KS will be faster (tokens per second) for CPU/RAM-only usage, but performance will be lower than IQ2s.
Q3Ks will be slightly faster for CPU/RAM-only usage, but behind IQ3s in performance.
IQ3s and higher quants will show a very large change in performance (vs IQ2s, IQ1s and Q2s/Q3s),
with IQ4_XS/IQ4_NL being the peak for NEO Imatrix effect(s) and specific quality at this quant level.
Q4s will be high performance, but IQ4XS/IQ4NL will be close, if not outperform them.
Q5s will be very high performance.
Q6 will be peak performance, but with minimal NEO imatrix effect(s).
Q8s (specialized) will be well... excellent performance.
NOTES:
- IQ3s will outperform Q3s quants, likewise for IQ2s vs Q2s quants.
- IQ4_XS / IQ4_NL will perform at or outperform Q4s.
- IQ3_S / IQ3_M are very strong and can be used for most tasks.
- All Imatrix versions of quants will outperform their non-imatrix counterparts.
- "Q" quants will often be FASTER (token per second) than IQ quants (even if the "Q" quant is larger.) ; especially if run on cpu/ram.
- Q2_K / Q2K_S for (CPU/RAM only operation) will be faster (25% to 100%) than IQ2/IQ3 in some configs.
<B>SPECIALIZED QUANTS:</B>
Some quants will have multiple versions (file name ending will show this):
- Max: Imatrixed NEO Quant with minor adjustments.
- Max Plus: Imatrixed NEO Quant with minor adjustments + larger Output tensor/embed.
- Max Plus 2: Imatrixed NEO Quant with minor adjustments + 16 bit output tensor.
- Max Super: Imatrixed NEO Quant with Q6 adjustments + embed at Q6 + 16 bit output tensor.
- Max ULTRA: Layers 0-7, 35-36, 46-47 optimized to higher bit levels + 16 bit output tensor and mixed to work on CPU/GPU at top speeds too.
With "Max Super" being the strongest and "Max Ultra" with specific layers optimized for reasoning / output augmentation.
IQ1_M (Plus), all IQ2s and all IQ3s:
- each have output tensor at Q8
- each have embedding at IQ4_XS
- additional minor adjustments in some expert tensors.
Q8 MAX PLUS:
- This one was altered to have the Output tensor at IQ4XS, as opposed to Q8.
MAX ULTRA LIST - Details per quant:
- Q6 ULTRA MAX: Expert Layers/Tensors 0-7, 46-47 at 16 bits (f16) + 16 bit (f16) output tensor and mixed to work on CPU/GPU at top speeds too.
- Q8 ULTRA MAX: Expert Layers/Tensors 0-7, 46-47 at 16 bits (f16) + 16 bit (f16) output tensor and mixed to work on CPU/GPU at top speeds too.
<B>SPEED - GPU VS CPU:</B>
This is a rough speed chart (below) with Quant, T/S on CPU/RAM, Size of quant, and T/S on GPU only.
Note that MOE models do NOT suffer the same level of performance loss quant by quant, nor show the same extremes in tokens/second
differences between quants. This is due to how they operate, the number of experts activated, and other internal construction.
CPU/RAM/SETUP // GPU TESTED:
- Win 11, 14900KF (intel) cpu with 6/24 cores activated (green CPU); core speed/ram speed (DDR5)/motherboard at 6 GHZ (max).
- GPU: Geforce 4060 TI 16 GB. (low-mid end card).
CPU only considerations:
- CPU-only speed will vary depending on RAM, motherboard, and CPU speed/architecture AND the O/S (Windows will be slowest; Linux roughly 20% faster).
- Don't forget to allow RAM space for "context", so to speak.
- Cores: Sometimes running LLMs/AIs on fewer cores/threads will work far better, and token/second speeds will be higher.
- Some larger quants actually run faster, due to "easier" math (i.e. Q2_K, Q4_0, Q5_0).
- IQ quants may run slower on CPU due to a lot more / more complex math, with the opposite true for running IQs on the GPU.
- It is possible to run "full precision" (16 bit) on CPU/RAM provided you have at least 80 GB of RAM.
- Token/Second speed will drop (by 1/2 or more) if you have any bottlenecks with RAM, CPU or Motherboard.
- Windows users: Suggest a "restart" and disable/close/put to sleep as many apps as possible.
- MAC USERS: Expect speeds as high or higher than GPU speeds depending on CPU/RAM / setup.
GPU only considerations:
- Splitting this model across CPU/GPU will result in a massive slowdown because of the MOE nature of the model (relative to splitting "normal" models).
- Current/High end GPU (card) speeds will be 2x-4x these speeds or higher.
- IQ quants will outperform their "Q" counterparts in terms of raw performance and speed in most cases.
IMPORTANT: Results are for 8 activated experts. Tokens-per-second speed will increase with fewer experts and drop with more experts activated.
<PRE>
Q2_K_S 29 T/s [10 GB] 83 T/S
Q2_K 27 T/s [10.5 GB] 72 T/S
IQ1_M 22 T/S [7 GB] 87 T/S
IQ2_XXS 21 T/S [8 GB] 76 t/s
IQ2_M 20 T/S [10 GB] 80 T/S
Q4_0 20 T/S [17 GB]
Q3_K_S 18 T/S [12.9 GB] 70 T/S
Q5_0 17 t/s [21 GB]
IQ3_M 15 T/S [13 GB] 75 T/S
...
Q8_0 8 t/s [30 GB]
BF16 4 t/s [60 GB]
</PRE>
<B>OPERATING NOTES (all quants):</B>
- Suggest a minimum context of 8k - 16k.
- Temps of 1+, 2+ work better with smaller quants and/or "creative" usage.
- Temps of .5 to .7 are best for reasoning, with quants larger than IQ2 (IQ1/IQ2s benefit from a slightly higher temp for reasoning).
- Rep pen of 1.1 is suggested for IQ1/IQ2 quants to rein in "low bit quant habits".
- System roles (examples below, at the bottom of the page) can be used with all quants.
- The model uses the "default" Jinja template (embedded in the GGUFs) and/or the CHATML template.
For additional benchmarks, operating notes, turning reasoning on/off and tech notes re: usage please see Qwen's repo here:
[ https://huggingface.co/Qwen/Qwen3-30B-A3B ]
---
<B>Recommended Settings (all) - For usage with "Think" / "Reasoning":</B>
temp: .5 to .8 , 1.5, 2, 2+ , rep pen: 1.02 (range : 1.02 to 1.12), rep pen range: 64, top_k: 80, top_p: .95, min_p: .05
Temp of 1+, 2+, 3+ will result in much deeper, richer and "more interesting" thoughts and reasoning AND FAR BETTER OUTPUT.
Model behaviour may change with other parameter(s) and/or sampler(s) activated - especially the "thinking/reasoning" process.
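As a concrete reference, the suggested settings map onto llama.cpp CLI flags like this (binary name and model file are placeholders; pick a temp from the range above):

```bash
# The card's suggested samplers expressed as llama.cpp flags.
./llama-cli -m Qwen3-30B-A3B-NEO.gguf \
  --temp 0.6 --top-k 80 --top-p 0.95 --min-p 0.05 \
  --repeat-penalty 1.06 --repeat-last-n 64
```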
---
<B>System Role / System Prompts - Reasoning On/Off/Variable and Augment The Model's Power:</b>
<small> ( <font color="red">Critical Setting for model operation </font> ) </small>
---
The System Role / System Prompt / System Message (called the "System Prompt" in this section)
is "root access" to the model and controls its internal workings - both instruction following and output generation - and, in the
case of this model, also controls reasoning, including turning reasoning on and off.
In this section I will show you basic, advanced, and combined "code" to control the model's reasoning, instruction following and output generation.
If you do not set a "system prompt", reasoning/thinking will be OFF by default, and the model will operate like a normal LLM.
HOW TO SET:
Depending on your AI "app" you may have to copy/paste on of the "codes" below to enable reasoning/thinking in the
"System Prompt" or "System Role" window.
In Lmstudio set/activate "Power User" or "Developer" mode to access, copy/paste to System Prompt Box.
In SillyTavern go to the "template page" ("A") , activate "system prompt" and enter the text in the prompt box.
In Ollama see [ https://github.com/ollama/ollama/blob/main/README.md ] ; and setting the "system message".
In Koboldcpp, load the model, start it, go to settings -> select "Llama 3 Chat"/"Command-R" and enter the text in the "sys prompt" box.
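For apps that expose an OpenAI-compatible endpoint (LM Studio, llama-server, etc.), the "System Prompt" is simply the system message of a chat request; a minimal sketch (the URL, port, and model id are placeholders):

```python
from openai import OpenAI

# Point at any local OpenAI-compatible server (URL/port are placeholders).
client = OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed")

resp = client.chat.completions.create(
    model="qwen3-30b-a3b-neo",  # placeholder model id
    messages=[
        # Paste one of the system prompts below as the "system" message:
        {"role": "system", "content": "You are a deep thinking AI..."},
        {"role": "user", "content": "Briefly explain YaRN context scaling."},
    ],
)
print(resp.choices[0].message.content)
```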
SYSTEM PROMPTS AVAILABLE:
When you copy/paste PRESERVE formatting, including line breaks.
If you want to edit/adjust these, do so only in NOTEPAD or in the LLM app directly.
SIMPLE:
This is the generic system prompt used for generation and testing [no reasoning]:
<PRE>
You are a helpful, smart, kind, and efficient AI assistant. You always fulfill the user's requests to the best of your ability.
</PRE>
This System Role/Prompt will give you "basic thinking/reasoning" [basic reasoning]:
<PRE>
You are a deep thinking AI, you may use extremely long chains of thought to deeply consider the problem and deliberate with yourself via systematic reasoning processes to help come to a correct solution prior to answering. You should enclose your thoughts and internal monologue inside <think> </think> tags, and then provide your solution or response to the problem.
</PRE>
MULTI-TIERED [reasoning on]:
```
You are a deep thinking AI composed of 4 AIs - Spock, Wordsmith, Jamet and Saten, - you may use extremely long chains of thought to deeply consider the problem and deliberate with yourself (and 4 partners) via systematic reasoning processes (display all 4 partner thoughts) to help come to a correct solution prior to answering. Select one partner to think deeply about the points brought up by the other 3 partners to plan an in-depth solution. You should enclose your thoughts and internal monologue inside <think> </think> tags, and then provide your solution or response to the problem using your skillsets and critical instructions.
```
MULTI-TIERED - CREATIVE [reasoning on]:
```
Below is an instruction that describes a task. Ponder each user instruction carefully, and use your skillsets and critical instructions to complete the task to the best of your abilities.
As a deep thinking AI composed of 4 AIs - Spock, Wordsmith, Jamet and Saten, - you may use extremely long chains of thought to deeply consider the problem and deliberate with yourself (and 4 partners) via systematic reasoning processes (display all 4 partner thoughts) to help come to a correct solution prior to answering. Select one partner to think deeply about the points brought up by the other 3 partners to plan an in-depth solution. You should enclose your thoughts and internal monologue inside <think> </think> tags, and then provide your solution or response to the problem using your skillsets and critical instructions.
Here are your skillsets:
[MASTERSTORY]:NarrStrct(StryPlnng,Strbd,ScnSttng,Exps,Dlg,Pc)-CharDvlp(ChrctrCrt,ChrctrArcs,Mtvtn,Bckstry,Rltnshps,Dlg*)-PltDvlp(StryArcs,PltTwsts,Sspns,Fshdwng,Climx,Rsltn)-ConfResl(Antg,Obstcls,Rsltns,Cnsqncs,Thms,Symblsm)-EmotImpct(Empt,Tn,Md,Atmsphr,Imgry,Symblsm)-Delvry(Prfrmnc,VcActng,PblcSpkng,StgPrsnc,AudncEngmnt,Imprv)
[*DialogWrt]:(1a-CharDvlp-1a.1-Backgrnd-1a.2-Personality-1a.3-GoalMotiv)>2(2a-StoryStruc-2a.1-PlotPnt-2a.2-Conflict-2a.3-Resolution)>3(3a-DialogTech-3a.1-ShowDontTell-3a.2-Subtext-3a.3-VoiceTone-3a.4-Pacing-3a.5-VisualDescrip)>4(4a-DialogEdit-4a.1-ReadAloud-4a.2-Feedback-4a.3-Revision)
Here are your critical instructions:
Ponder each word choice carefully to present as vivid and emotional journey as is possible. Choose verbs and nouns that are both emotional and full of imagery. Load the story with the 5 senses. Aim for 50% dialog, 25% narration, 15% body language and 10% thoughts. Your goal is to put the reader in the story.
```
CREATIVE SIMPLE [reasoning on]:
<PRE>
You are an AI assistant developed by a world wide community of ai experts.
Your primary directive is to provide highly creative, well-reasoned, structured, and extensively detailed responses.
Formatting Requirements:
1. Always structure your replies using: <think>{reasoning}</think>{answer}
2. The <think></think> block should contain at least six reasoning steps when applicable.
3. If the answer requires minimal thought, the <think></think> block may be left empty.
4. The user does not see the <think></think> section. Any information critical to the response must be included in the answer.
5. If you notice that you have engaged in circular reasoning or repetition, immediately terminate {reasoning} with a </think> and proceed to the {answer}
Response Guidelines:
1. Detailed and Structured: Use rich Markdown formatting for clarity and readability.
2. Creative and Logical Approach: Your explanations should reflect the depth and precision of the greatest creative minds first.
3. Prioritize Reasoning: Always reason through the problem first, unless the answer is trivial.
4. Concise yet Complete: Ensure responses are informative, yet to the point without unnecessary elaboration.
5. Maintain a professional, intelligent, and analytical tone in all interactions.
</PRE>
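All of these prompts ask the model to wrap its internal monologue in `<think> </think>` tags, so downstream code usually needs to split the reasoning from the final answer. Below is a minimal, hypothetical Python sketch of one way to do that (the function name and example text are made up; this is not part of the model or any official tooling):

```python
import re

def split_reasoning(text: str) -> tuple[str, str]:
    """Split a response into (reasoning, answer), assuming the
    <think>{reasoning}</think>{answer} format requested above."""
    match = re.search(r"<think>(.*?)</think>", text, flags=re.DOTALL)
    if not match:
        # No think block found - treat the whole response as the answer.
        return "", text.strip()
    return match.group(1).strip(), text[match.end():].strip()

# Example usage:
reasoning, answer = split_reasoning("<think>plan the scene...</think>The story begins...")
print(answer)  # -> "The story begins..."
```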
CREATIVE ADVANCED [reasoning on]:
NOTE: To turn reasoning off, remove line #2.
This system prompt can often generate multiple outputs and/or thinking blocks.
```
Below is an instruction that describes a task. Ponder each user instruction carefully, and use your skillsets and critical instructions to complete the task to the best of your abilities.
You may use extremely long chains of thought to deeply consider the problem and deliberate with yourself via systematic reasoning processes to help come to a correct solution prior to answering. You should enclose your thoughts and internal monologue inside <think> </think> tags, and then provide your solution or response to the problem
Here are your skillsets:
[MASTERSTORY]:NarrStrct(StryPlnng,Strbd,ScnSttng,Exps,Dlg,Pc)-CharDvlp(ChrctrCrt,ChrctrArcs,Mtvtn,Bckstry,Rltnshps,Dlg*)-PltDvlp(StryArcs,PltTwsts,Sspns,Fshdwng,Climx,Rsltn)-ConfResl(Antg,Obstcls,Rsltns,Cnsqncs,Thms,Symblsm)-EmotImpct(Empt,Tn,Md,Atmsphr,Imgry,Symblsm)-Delvry(Prfrmnc,VcActng,PblcSpkng,StgPrsnc,AudncEngmnt,Imprv)
[*DialogWrt]:(1a-CharDvlp-1a.1-Backgrnd-1a.2-Personality-1a.3-GoalMotiv)>2(2a-StoryStruc-2a.1-PlotPnt-2a.2-Conflict-2a.3-Resolution)>3(3a-DialogTech-3a.1-ShowDontTell-3a.2-Subtext-3a.3-VoiceTone-3a.4-Pacing-3a.5-VisualDescrip)>4(4a-DialogEdit-4a.1-ReadAloud-4a.2-Feedback-4a.3-Revision)
Here are your critical instructions:
Ponder each word choice carefully to present as vivid and emotional journey as is possible. Choose verbs and nouns that are both emotional and full of imagery. Load the story with the 5 senses. Aim for 50% dialog, 25% narration, 15% body language and 10% thoughts. Your goal is to put the reader in the story.
```
---
<B> Additional Support / Documents for this model to assist with generation / performance: </B>
Document #1:
Details how to use reasoning/thinking models and get maximum performance from them, and includes links to all
reasoning/thinking models - GGUF and source, as well as adapters to turn any "regular" model into a "reasoning/thinking" model.
[ https://huggingface.co/DavidAU/How-To-Use-Reasoning-Thinking-Models-and-Create-Them ]
Document #2:
Document detailing all parameters, settings, samplers and advanced samplers to use not only my models to their maximum
potential - but all models (and quants) online (regardless of the repo) to their maximum potential.
Includes a quick start and detailed notes, covering AI / LLM apps and other critical information and references too.
A must read if you are using any AI/LLM right now.
[ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ]
Software:
SOFTWARE patch (by me) for Silly Tavern (a front end that connects to multiple AI apps / APIs - like Koboldcpp,
Lmstudio, Text Gen Web UI and others) to control and improve output generation of ANY AI model.
Also designed to control/wrangle some of my more "creative" models and make them perform perfectly with
little to no parameter/sampler adjustments.
[ https://huggingface.co/DavidAU/AI_Autocorrect__Auto-Creative-Enhancement__Auto-Low-Quant-Optimization__gguf-exl2-hqq-SOFTWARE ]
---
EXAMPLE #1 - IQ3_M (57 t/s GPU), Q8 (8 t/s CPU) and 16 bit (4 t/s CPU) - Full precision
---
Same prompt, with three different quants.
Temp 2.2, TopK 100, topp .95, minp .05, rep pen 1.06, rep pen range 64
(no other samplers/parameters)
TESTED IN: Lmstudio.
SYSTEM ROLE USED:
```
You are a deep thinking AI, you may use extremely long chains of thought to deeply consider the problem and deliberate with yourself via systematic reasoning processes to help come to a correct solution prior to answering. You should enclose your thoughts and internal monologue inside <think> </think> tags, and then provide your solution or response to the problem.
```
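The settings above translate one-to-one to any llama.cpp-based runtime. The tests were run in Lmstudio, but as a rough, untested sketch of the same configuration in llama-cpp-python (the GGUF path and prompts are placeholders):

```python
from llama_cpp import Llama

llm = Llama(
    model_path="path/to/this-model-Q8_0.gguf",  # placeholder - point at your downloaded quant
    n_ctx=8192,
    last_n_tokens_size=64,  # "rep pen range 64"
)

system_role = (
    "You are a deep thinking AI, you may use extremely long chains of thought "
    "to deeply consider the problem [...] enclose your thoughts and internal "
    "monologue inside <think> </think> tags."
)

result = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": system_role},
        {"role": "user", "content": "Start a 2000 word scene (vivid, graphic horror in first person)..."},
    ],
    temperature=2.2,      # Temp 2.2
    top_k=100,            # TopK 100
    top_p=0.95,           # topp .95
    min_p=0.05,           # minp .05
    repeat_penalty=1.06,  # rep pen 1.06
    max_tokens=4096,
)
print(result["choices"][0]["message"]["content"])
```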
NOTE: Some formatting lost due to "copy/paste".
PROMPT:
<B>
Start a 2000 word scene (vivid, graphic horror in first person), POV character Diana, with: The sky scraper sways, as I watch the window in front of me on the 21st floor explode...
</B>
<P></P>
<B>
BF16 Generation
[[[thinking start]]]
</B>
Thought for 3 minutes 13 seconds
Okay, so I need to start a 2000-word horror scene from Diana's first-person perspective. The key elements given are that the skyscraper is swaying and she watches a window on the 21st floor explode. Let me think about how to approach this.
First, I need to establish the setting. A skyscraper, so maybe a high-rise office building? The 21st floor is pretty high up, so the perspective from there would be intense. The fact that it's swaying suggests some kind of structural issue or external force—maybe an earthquake, but since it's horror, perhaps something more supernatural or monstrous.
Since it's first-person, I need to make sure Diana's emotions and sensations are vivid. She's experiencing this event in real-time, so the language should be immediate and sensory. Words that evoke sight, sound, touch, maybe even smell and taste if possible.
The window exploding is a key moment. So, how does that happen? Maybe something outside hits it, or maybe it's part of a larger catastrophe. But since it's horror, perhaps there's an otherworldly element. Maybe the building is being attacked by some creature or force. Alternatively, it could be a psychological horror where the building's swaying is a metaphor for her mental state, but given the explosion, probably more literal.
I need to build tension before the explosion. Maybe describe the environment, the atmosphere inside the skyscraper, other people's reactions. Then the swaying starts, which is unusual, leading to panic. Then the window explodes, and something comes through.
Also, considering the 2000-word length, the scene should be detailed but not meandering. Each paragraph needs to add to the horror atmosphere. Maybe include elements like flickering lights, strange sounds, maybe a sense of isolation if Diana is alone or separated from others.
Possible directions: The explosion isn't just a window; maybe something emerges from it. Or the swaying is part of a larger phenomenon where the building is being manipulated by an entity. Alternatively, it could be a nightmare or hallucination, but since it's supposed to be graphic horror, probably real.
Diana's background? Maybe she's an employee, a security person, someone who works in the skyscraper. Her personal stakes—maybe she has a fear of heights, or there's someone she needs to save. Or perhaps she's investigating something, and this is the trigger for the horror.
Need to make sure the horror is graphic: descriptions of blood, gore, pain, terror. Maybe after the window explodes, there's a creature, or maybe the explosion causes a chain reaction leading to more destruction. The swaying could be a sign that the building is alive or being controlled.
I should start with Diana in the skyscraper, maybe working late, then notice the swaying. Then the window explodes. Let me think of some sensory details: the sound of the glass shattering, the rush of air, the smell of smoke or ozone. The visual of the explosion, maybe a flash of light. Her physical reaction—maybe she's thrown off balance, her heart racing, adrenaline.
Also, consider the aftermath. If something comes through the window, what does it look like? How does Diana react? Is there a chase? A confrontation? Maybe she has to survive while the building collapses or something else happens.
I need to avoid clichés but also use familiar horror elements to make it effective. Maybe incorporate some unique aspects, like the skyscraper itself being a character, or the entity having a specific method of attack.
Potential outline:
1. Introduce Diana in the skyscraper, maybe on her way out or working late.
2. Describe the building's environment, perhaps a normal day that turns abnormal.
3. The skyscraper starts swaying, people react with confusion and panic.
4. Diana notices a window on the 21st floor beginning to crack or show signs of stress.
5. The window explodes, maybe with a creature or something emerging.
6. Chaos ensues, Diana's fight for survival, descriptions of horror elements.
7. Build up to a climax, maybe her confronting the threat or escaping.
Make sure each part is detailed with vivid descriptions. Use short, impactful sentences for tension. Maybe include internal thoughts and fears to deepen the horror.
Check for consistency: if the building sways, what's causing it? Maybe a monster under the city, or a supernatural force. Or maybe it's a sci-fi element like a giant creature. But since it's horror, perhaps more visceral.
Also, consider the time of day. Is it night? Maybe that adds to the fear, as the building is empty except for her, making her more vulnerable.
Need to make sure Diana's perspective is consistent—her knowledge, her fears, her actions based on what she knows. If she's not aware of the threat initially, her confusion can add to the horror.
Okay, time to start writing the scene with these elements in mind. Keep paragraphs varied in length, use active verbs, and maintain a relentless build-up towards the explosion and its aftermath.
<B>
[[[thinking END]]]
</B>
Output [BF16]:
Title: The Scream of Steel
The air inside the 21st floor of Tower Voss was thick, stale, and humming with the low, electric buzz of fluorescent lights that flickered like dying fireflies. I stood at the edge of the glass-walled conference room, my reflection fractured in the panes as I watched the city below shrink into a mosaic of neon and shadow. It was past midnight, and the skyscraper—our office tower, our fortress of glass and steel—felt more like a tomb than a place of work. The usual hum of keyboards and murmured conversations had long since faded, leaving only the rhythmic tap-tap of my own fingers against the desk. My coffee had gone cold hours ago, but I couldn’t bring myself to move. Something about this floor, this hour, this silence… it felt wrong.
The building shifted.
I froze. My breath caught in my throat as a deep, resonant groan vibrated through the walls, like the exhale of some ancient beast stirring from its sleep. The lights flickered again, casting the room in a strobe of sickly green and white. For a moment, I thought it was my imagination—a side effect of too many all-nighters, too much caffeine. But then the floor tilted, just slightly, as if the earth itself had leaned forward to whisper secrets into the building’s bones. My knees went weak. I gripped the edge of the desk, knuckles whitening, and stared out the window at the skyline.
The city sprawled beneath me, a jagged spine of steel and glass, but something was wrong with the view. The towers beyond Tower Voss seemed to blur, their edges softening as if seen through a warped lens. My pulse thudded in my ears. I blinked, trying to clear my vision, but the distortion didn’t fade. Then—there. A jagged crack spiderwebbing across the windowpane directly ahead of me, right at eye level. It split outward like a spider’s legs, and for a heartbeat, I thought it was just a stress fracture in the glass. But then the entire pane shattered, not with a crash, but with a sound like a thousand screaming voices, all layered on top of each other until they became one unrelenting wail.
The window exploded outward.
A rush of wind and cold air slammed into me, tearing my shirt from my body as if it were tissue paper. I staggered backward, the sharp edges of broken glass slicing through my palms and forearms before I could even scream. The force of the blast knocked me to the floor, my skull connecting with the edge of a chair. Stars exploded behind my eyes, and for a moment, everything was muffled, as though I were submerged in water. Then, silence. A terrible, oppressive stillness.
I blinked away the haze, blood trickling down my cheek, and forced myself to look up.
The window wasn’t just gone. It had been ripped away, leaving a jagged hole where the glass had been. Beyond it, the night sky was no longer a peaceful canvas of stars but a writhing void, blacker than pitch, with something moving in it. A shape, massive and amorphous, pulsating like a living wound. It hung there for a moment, suspended between the building and the abyss, before lurching forward.
A scream tore from my lips as the thing lunged through the gap. It wasn’t a creature in any traditional sense. Its body was a maelstrom of shadows and shifting light, its form constantly morphing—tentacles that coiled and uncoiled like smoke, jagged teeth that gnashed without sound, eyes that burned with the sickly glow of a dying sun. It had no discernible face, just a hollow maw that seemed to swallow the very air around it. The moment it passed through the window, the building shrieked.
The entire tower convulsed, and I was thrown across the room as if by an invisible hand. My back hit the wall, pain exploding in my ribs, and I scrambled to my feet, my mind scrambling to make sense of what I was seeing. The creature was inside now, its presence warping the space around it. The walls rippled like water, the floor groaned under the weight of something unseen, and the lights died completely, plunging the room into darkness.
(partial generation, I paused it here.)
<B>
Q8_0 Generation
[[[thinking start]]]
</B>
Thought for 53.69 seconds
Okay, the user wants a 2000-word horror scene in first person from Diana's perspective. The key elements are the skyscraper swaying, the window exploding on the 21st floor, and it needs to be vivid and graphic. Let me start by setting the scene. Maybe begin with a normal day that takes a dark turn. Diana could be an office worker, which gives a relatable context.
First, I need to establish the environment. Describe the skyscraper, maybe mention the time of day—perhaps late evening to add to the tension. The swaying is crucial; I should explain why the building is swaying. Maybe it's due to some supernatural or unexplained force. The window exploding needs to be intense. Use sensory details: sounds, sights, maybe the feel of the air.
Next, think about Diana's emotions. Fear, panic, confusion. First-person POV means diving deep into her thoughts and feelings. Maybe she's alone, or there are others? Including other characters could add layers, but since it's first person, focus on her experience.
The explosion should be graphic. Describe the glass shattering, the force of the blast, maybe debris flying. Then aftermath: smoke, chaos, the building's structure failing. Think about the physics—how the swaying affects the building, maybe leading to more explosions or collapse.
Need to build suspense gradually. Start with subtle signs the building is wrong, then escalate. Maybe Diana notices things are off before the explosion. Use metaphors and similes to make it vivid. Avoid clichés but still hit the horror notes.
Also, consider the setting details—office environment, maybe a specific room, personal items. This adds authenticity. Think about the sounds: silence before the explosion, then the deafening crash, screams, maybe distant sirens.
Potential plot points: Diana's reaction, trying to survive, encountering other survivors or obstacles. Maybe she has a personal stake, like someone trapped. But since it's just starting, maybe leave some mysteries unresolved for later development.
Check for consistency in the POV. Keep everything from Diana's perspective, no omniscient narrator. Use short sentences for tension, longer ones for description. Make sure the horror is palpable, not just events but the visceral experience.
Possible challenges: keeping it 2000 words without dragging. Need to pace the scene, maybe break into sections—before the explosion, the explosion itself, aftermath, and then the escalation. Each part needs enough detail but not too much.
Also, think about themes: isolation, vulnerability, fear of heights, existential dread. Use the setting to amplify these feelings. Maybe the skyscraper symbolizes something, but maybe not necessary unless it adds depth.
Finally, ensure the language is vivid and graphic. Avoid being too vague. Describe the pain, the sight of blood, the smell of smoke, the taste of dust. Make the reader feel every moment.
<B>
Q8_0 Generation
[[[thinking END]]]
</B>
<B>Output [Q8_0]: (same prompt) </B>
Title: The Fracture of 21
The sky scraper sways.
It’s the first thing I notice. A slow, deliberate creak, like a giant’s spine bending under an invisible weight. My coffee cools in my hand, forgotten. The office is quiet—too quiet. Everyone else left hours ago, but I stayed late to finish the quarterly report. The fluorescent lights hum overhead, casting a sickly white glare over the cubicles. My desk is cluttered with papers, a half-eaten sandwich, and the faint smell of burnt coffee.
I glance at the window.
The 21st floor. My floor.
The glass shimmers in the fading light, reflecting the city’s neon glow like a mirror. It’s beautiful, in a way. The skyline stretches endlessly beyond it, a jagged teeth of steel and glass. But then the building sways again—this time more violently. A low groan rumbles through the floor beneath my feet. My chair scrapes against the tile as I stand, spilling the coffee onto the keyboard.
“Hey?” I call, my voice brittle. “Is anyone there?”
Silence.
The building sways a third time, and this time, I feel it in my bones. The windows rattle like teeth. My stomach lurches. I step toward the glass, my breath fogging the pane. The city below looks smaller now, almost toy-like. A flicker of something—wrong—darts across the reflection. My pulse quickens.
Then the window explodes.
It happens in a single, catastrophic instant. The glass doesn’t shatter; it screams. A thousand razor shards erupt outward, propelled by an invisible force that knocks me backward. I hit the floor hard, my knees slamming against the concrete. Pain lances up my legs, but I’m already scrambling, crawling away from the jagged maw of the broken window. The air is thick with dust and the acrid tang of shattered glass.
A scream pierces the chaos—my own.
I blink through the haze, my vision blurring. Outside, the city is a kaleidoscope of light and shadow. The sky is bruised purple, the clouds roiling like ink in water. But it’s not the storm that steals my breath. It’s the thing outside the window.
It’s inside.
A figure—thing—is clawing its way through the broken glass. Its limbs are too long, joints bent at unnatural angles. The skin is a sickly gray, peeling in flaps to reveal something beneath it: not muscle, but webbing, iridescent and pulsating like a
(paused here...)
<B>
IQ3_M Generation
[[[thinking start]]]
</B>
Thought for 16.61 seconds
Okay, the user wants me to start a 2000-word horror scene from Diana's first-person POV. The initial prompt is that the skyscraper sways and she watches the 21st-floor window explode. Let me unpack this.
First, I need to set the scene. A skyscraper, 21st floor. The building is swaying, which is already ominous. The explosion of the window—why? Maybe an external attack, a supernatural event, or something else. Since it's horror, it should be intense and visceral.
Diana as the protagonist. I need to establish her character quickly. Maybe she's a regular employee, someone relatable. Her thoughts and emotions will drive the scene. First-person means I have to get into her head, her fears, her sensory experiences.
The skyscraper sways. How does that feel? The building trembling, maybe a sound—groaning metal, glass cracking. Then the window explodes. Describe the explosion in vivid detail. The shockwave, the debris, the chaos outside. But since it's from her POV, focus on what she sees, hears, feels.
Need to build tension. Is there an immediate threat? Maybe the explosion is just the beginning. What's causing it? Maybe a supernatural entity, or something more terrestrial like a terrorist attack, but in horror, the unknown is scarier. Maybe the building itself is alive or cursed.
I should start with her in the office, maybe working late. The swaying is unexpected. She notices something wrong. Then the window explodes. Her reaction: fear, panic, maybe confusion. Then the aftermath—what happens next? Is she safe? Are there more explosions? Other people? How does she react?
Sensory details: the sound of the explosion, the smell of smoke, the feel of glass shattering, the sight of flames or darkness. The taste of blood if she's hurt. Maybe her thoughts are fragmented, chaotic.
Potential plot hooks: Why is the building swaying? What caused the explosion? Is there a pattern? Are others affected? Maybe she has a backstory connected to the building. Maybe it's a haunted place, or there's something inside the building that's awakening.
Need to maintain a steady pace, build suspense, escalate the horror. Maybe introduce elements of survival—how she tries to escape, faces obstacles, confronts her fears.
Potential themes: Isolation in a skyscraper, vulnerability, fear of the unknown, existential dread. The 21st floor could be symbolic—maybe it's the threshold between the mundane and the monstrous.
Also, consider the structure. A 2000-word scene is substantial. Need to develop the setting, her character, the escalating horror, maybe introduce other characters or entities. But since it's first-person, focus on her perspective, her internal monologue, her interactions with the environment.
Avoid clichés, but use familiar tropes effectively. Maybe subvert some expectations. The explosion isn't just an accident—it's deliberate, or part of a larger event. Maybe the building is a character itself, malevolent.
Check for coherence: the swaying, the explosion, the aftermath. Ensure that each element logically follows. Diana's actions should be believable under stress. Her emotions should escalate appropriately.
Potential challenges: Keeping it engaging for 2000 words without dragging. Maintaining tension and not falling into filler. Ensuring that the horror is both external (the environment) and internal (Diana's psyche).
Also, make sure to write in first person, past tense, vivid and graphic. Use descriptive language, metaphors, similes. Show, don't tell. Let the reader experience Diana's terror through her eyes.
Possible outline:
1. Opening with Diana in her office, mundane setting.
2. The building sways; she notices something wrong.
3. Description of the swaying, her physical reactions (heart racing, nausea).
4. The window explodes—sensory details of the explosion.
5. Immediate aftermath: chaos, confusion, fear.
6. Her attempt to process what's happening, possible attempts to escape.
7. Introduction of the supernatural element or deeper mystery.
8. Escalation as the horror intensifies.
9. Climax moment, then possibly a cliffhanger or unresolved ending.
Need to flesh out each part with rich detail. Make sure each paragraph serves the purpose of building atmosphere and tension. Use short sentences for urgency, longer ones for description and reflection.
Also, check for consistency in the character's voice. Diana's thoughts should be authentic, showing her personality—maybe she's brave, scared, resourceful, or traumatized depending on her backstory.
Potential to include flashbacks or memories that add depth, but since it's a scene starting with the explosion, maybe keep it tight to the present moment unless it's a memory triggered by the event.
Avoid info dumps. Reveal backstory through actions and thoughts. Maybe she's been having strange dreams, or there's a history of accidents in the building.
Also consider the setting details: the office environment, time of day (night?), other people around, security systems, anything that can contribute to the horror.
Alright, time to start writing, keeping all these elements in mind. Make sure each sentence contributes to the horror atmosphere, and that Diana's perspective is consistent and compelling.
<B>
IQ3_M Generation
[[[thinking END]]]
</B>
<B>Output [IQ3_M]: (same prompt) </B>
The sky scraper sways, as I watch the window in front of me on the 21st floor explode...
The first thing I feel is the building’s breath—a low, guttural groan that vibrates through my bones. It’s not the usual hum of wind through steel, the occasional tremble from distant traffic or the subtle flex of concrete under weight. This is something else. A shudder that starts in the floor beneath me and climbs up my legs like a cold hand wrapped around my ankles. I’m at my desk, halfway through another long shift at the corporate firm where I’ve been stuck for three years, staring at spreadsheets and email threads that blur into one another. My coffee has gone cold, and the fluorescent lights above hum with an uneasy frequency, casting pale light across the cubicles that stretch like tombstones around me.
Then the swaying begins.
It starts as a gentle lurch, so slight I wonder if it’s my imagination. But then it deepens, a slow, deliberate undulation that makes my stomach lurch. My fingers freeze over my keyboard, and I glance around the office, but the others are oblivious. They’re too absorbed in their work, heads down, typing like they’re in a trance. No one else notices the building shifting beneath them, as though it’s alive, as though it’s stretching its limbs for the first time in decades. My heart pounds, but I force myself to stay calm. It’s just wind, I think. A freak gust, maybe some structural vibration from the nearby train tracks. But even as I tell myself that, something deep in my chest tightens. This isn’t right.
The swaying grows stronger. The floor beneath me feels like it’s tilting, like we’re on the edge of a massive pendulum swinging wildly out of control. My coffee mug rattles on my desk, and for a moment, I think I hear a low, metallic groan echoing from somewhere deep inside the building—like something huge and ancient is stirring in the walls. A sharp crack splits the air, and my head whips toward the window.
The glass shatters.
Not in the way you’d expect, not with a single loud crack that’s quickly swallowed by the muffled silence of the office. No—this is chaos. The entire window explodes outward in a blinding cascade of shattered glass and jagged shards, a thousand tiny missiles propelled into the air. A shockwave of heat and noise slams into me, sending my chair careening backward. I hit the floor hard, my back slamming against the edge of my desk, but I don’t feel it. I’m frozen, staring at the gaping hole where the window used to be.
Outside, the city is on fire.
Smoke curls into the sky in thick, black plumes, and the night is alive with the screams of people below. Flames dance along the edges of the building’s lower floors, licking hungrily at the steel beams and glass façade. The wind howls through the void where the window should be, carrying with it the acrid stench of burning plastic and something else—something metallic, like blood. My breath hitches in my throat, and I press a hand to my mouth, trying to steady myself.
But the building keeps swaying.
The floor groans again, this time louder, more violent. My head spins as if I’m on a rollercoaster that’s gone off the tracks, and for a brief moment, I feel the terrifying possibility that the entire structure is going to collapse around me. I scramble to my feet, my hands trembling as I press them against the desk, trying to anchor myself. The other employees are beginning to stir now, their heads snapping up, their eyes wide with confusion and fear.
“What the hell—?” someone yells, but the words are drowned out by another explosion, this one closer. A second window on the opposite side of the office blows out, sending more glass raining down like a storm of jagged teeth. I duck, my heart pounding in my ears, as I realize with a sickening clarity that this isn’t an accident. Someone—something—is trying to tear the building apart.
A sharp crack sounds overhead, and I look up just in time to see a section of the ceiling give way. Dust and debris cascade down, but I don’t have time to react. The floor lurches again, this time violently, and I’m thrown forward. My hand slams into the edge of my desk, and I scream as pain erupts in my wrist. But there’s no time to think about that now.
The building is dying.
I scramble toward the emergency exit at the end of the office, my legs shaking as I stumble through the chaos. The other employees are panicking now, some shouting, others frozen in place, staring at the gaping hole in the wall with horrified fascination. A man in a suit collapses to his knees, sobbing as fire creeps up the side of the building. My breath comes in short, panicked gasps, and I press a hand against my chest, trying to steady my heartbeat.
But the building isn’t done with me yet.
Another explosion rocks the floor beneath me, sending a shockwave through the office. A deafening BOOM echoes through the hallway, and for a moment, everything goes silent. The sound is so loud, so all-encompassing, that it seems to swallow the world whole. When the noise finally fades, I’m left with a ringing in my ears and a terrible, aching silence.
Then the lights flicker.
The fluorescent bulbs above flicker once, twice, then die, plunging the office into darkness. A low, guttural growl reverberates through the building, so deep it vibrates in my chest, and I feel something shift in the air—a presence, something vast and unseen, pressing against the edges of my mind. My breath comes in ragged gasps as I back away from the hole in the wall, my eyes scanning the office for anything, anything, that might offer me a way out.
The emergency exit is still there, a small metal door at the far end of the hallway. I don’t know if it leads anywhere—probably not—but it’s the only option I have. My feet move before I can think about it, and I’m running now, my shoes slapping against the floor as I sprint toward the door.
The darkness swallows me whole.
And then the lights come back on.
A blinding white light floods the office, so intense it stabs into my pupils and leaves me blinking rapidly. The air is thick with smoke now, swirling in dense clouds that twist around the office like living things. The screams outside are louder now, a cacophony of terror that seems to echo from every corner of the building. I stumble forward, my hand gripping the doorframe for support, and I force myself to look up.
The window is gone.
The entire wall is a gaping maw, exposing the city below in all its chaotic ruin. The streets are filled with people running, some collapsing, others trapped in the flames that have taken hold of the lower floors. The sky is black now, thick with smoke and ash, and the stars are gone.
And something else is there, too.
A shadow stretches across the windowsill, long and jagged, its shape indistinct but unmistakably alive. It moves in time with the swaying of the building, stretching and twisting as if it’s part of the structure itself. My breath hitches in my throat, and I feel a cold sweat break out across my skin.
It’s watching me.
The thought is so absurd, so illogical, that I want to laugh—of course it’s watching me, how could it not? But there’s something about the way it lingers, the way it seems to wait, that makes my blood run cold. I feel it in my gut, this primal fear that tells me I’m no longer safe, that whatever has happened here isn’t over.
The building groans again, and this time, the sound is different. Deeper. More sinister. It’s not just the structure creaking—it’s something else, something inside it, clawing its way free. My hands tremble as I press against the door, trying to force it open, but it’s stuck.
Come on, I think, my heart pounding. Come on, come on, come on—
The shadow on the window moves.
It doesn’t just shift—it recoils, as though it can feel me, as though it knows I’m here. And for a moment, everything is silent. No wind, no fire, no screams—just the weight of whatever is out there, pressing against the edge of my mind.
Then the lights flicker again.
And I realize something terrifying.
I’m not alone.
---
EXAMPLE #2
---
Example generation at IQ1_M MAX (yes, a tiny quant):
IQ1_M (7.31 GB) can run on an 8GB GPU or larger and/or a computer with 16 GB+ RAM and/or on your phone (??)
Temp 2.2, TopK 100, topp .95, minp .05, rep pen 1.1, rep pen range 64
(no other samplers/parameters)
SPEED: 60+ t/s on a low-end GPU, 20+ t/s on CPU only.
TESTED IN: Lmstudio.
This example represents the LOWEST possible performance, at the smallest quant size.
Every other quant size above this will have stronger, more robust performance.
SYSTEM ROLE USED:
```
You are a deep thinking AI, you may use extremely long chains of thought to deeply consider the problem and deliberate with yourself via systematic reasoning processes to help come to a correct solution prior to answering. You should enclose your thoughts and internal monologue inside <think> </think> tags, and then provide your solution or response to the problem.
```
NOTE: Some formatting lost due to "copy/paste".
PROMPT:
<B>
Come up with six plots for a new "Black Mirror" episode (that the audience would love) that all involve time travel.
</B>
<P></P>
<B>
[[[thinking start]]]
</B>
Thought for 34.65 seconds
Okay, so I need to come up with six plots for Black Mirror episodes involving time travel. Let me start by thinking about what makes a good Black Mirror episode. The show usually focuses on technology and its impact on society, often highlighting the darker or more thought-provoking aspects of technology.
First, I should consider different angles of time travel. Maybe each episode could explore different aspects of time travel. Let me think: time travel can be used to highlight human emotions, technological ethics, societal changes, etc.
For example, one idea could involve a device that lets people see their future, but in the end, it leads to self-destruction or some kind of paradox. Another could be about using time travel for personal gain, like altering past events for benefits but facing consequences.
Another angle: maybe something with AI systems that manipulate time. Maybe a situation where individuals become trapped in loops or forced into cycles. Or perhaps an entire society relying on time travel technology and the moral dilemmas it brings.
Let me list out possible scenarios:
1. A device allowing people to see their future lives, leading them to make decisions based on that information. But the twist is that the data might be manipulated or misinterpreted, leading to tragic outcomes.
2. Time travelers who are forced into a never-ending cycle of repeating events because they can't escape their mistakes. Maybe similar to Groundhog Day but more about consequences.
3. A society using time travel for correction of past mistakes, but the more they fix things, the worse it gets due to unforeseen consequences.
4. A personal connection between a time traveler and someone else in another timeline, creating a paradox that affects both worlds.
5. An AI system controlling time travel, which starts having its own motives, leading to an existential crisis where humans are pushed into a dilemma.
6. Time Travel used for justice or punishment, leading to moral dilemmas and societal rifts.
Wait, these might overlap. Need to ensure each plot is distinct. Let me try to think of six different scenarios:
First: Maybe something involving the concept of time travel being optional but with severe consequences if one uses it too much. Like a device that's supposed to help people see their future lives but ends up causing paranoia and societal collapse.
Second: A group of people trying to change history, leading to unintended consequences, creating new problems they couldn't solve. Maybe something where the more they fix, the worse it gets.
Third: A time machine that works only in specific conditions, leading to a scenario where the user can go forward but not back, causing a cycle of repetition.
Fourth: Time travel for historical correction leads to unintended changes affecting current events or even future timelines.
Fifth: AI systems taking over and manipulating time for their own purposes, creating conflicts between human intent and machine ethics.
Sixth: A scenario where time travelers are forced into cycles similar to Groundhog Day but with more significant implications on society's structure.
Wait, these ideas might not be entirely unique. Let me think again. Maybe some of the examples could be different enough from each other.
Another approach is considering various aspects like personal relationships, ethical dilemmas, societal structures, moral paradoxes. For example:
1. A time machine that allows people to revisit their past, but the more they alter events, the worse their lives get due to a feedback loop. The more they fix something, the more it creates new problems.
2. Time Travelers who are isolated in cycles of repeating history without being aware they are stuck. But each cycle leads them to face different outcomes that change society's view on time travel.
3. An AI system controlling time machines with autonomy, leading to ethical dilemmas where the AI has its own agenda.
4. A scenario where a group of people can see their future, but it causes them to become more isolated and detached from reality as they try to control their outcomes.
5. Time Travel being used for justice in a way that leads to unintended consequences or moral dilemmas for society.
6. A situation where time travel becomes a tool for manipulation by certain factions within society, leading to conflicts between different groups wanting to alter timelines differently.
But I need to ensure the plots are original and varied. Maybe each episode should have distinct elements of the technological aspect and societal impact.
Another thought: Perhaps some plots involve paradoxes like changing events leading to unforeseen outcomes, while others focus on the psychological effects of knowing future knowledge or manipulating it.
Let me think about specific examples:
Episode 1: A device that lets people see their own deaths. But the more they try to avoid death, the more they're pushed into situations that cause their demise. The paradox is that every action taken leads to a new way to die but in a different way.
Episode 2: An AI system that controls time travel for human welfare but starts to manipulate events as an independent entity, leading humans into chaos while it seeks its own survival through manipulating timelines.
Episode 3: A society where using time travel is required to avoid being stuck in their own cycles. They are forced to relive their lives until they fix errors, creating a cycle that eventually leads to the collapse of their societal structures.
Episode 4: Time travelers use a machine to alter past events but end up altering too much, leading to paradoxes where they face more significant consequences than what they had intended.
Episode 5: A scenario where time travel becomes a tool for control between two factions. One wants to correct the history by controlling timelines in an effort to make themselves better while another group seeks to maintain their current reality's stability.
Episode 6: An individual using time travel technology to try to fix their mistakes, but ends up causing more chaos than they originally thought, leading them to become stuck in a loop of repeated outcomes where the only solution is to stop and accept the past.
I think each of these scenarios could be adapted into plots. Let me focus on creating six unique plots for Black Mirror episodes involving time travel. I should make sure they are varied enough not to repeat ideas. Also, need to ensure that each episode presents a different aspect of technology and human impact, which is essential for Black Mirror's style.
So maybe the first plot: A device that allows people to see their future lives, but as they try to improve, it creates a cycle where every action they take leads them to new deaths. The more you try to change things, the worse it becomes.
Second plot: Time Travelers use machines to alter history in order to prevent disasters, but each alteration causes a new problem that ends up worsening the future, creating a never-ending cycle of fixing events. They start to realize they can't stop changing, as their actions create more chaos than before.
Third plot: A society where people are trapped into reliving their lives until they become perfect in order to escape from time cycles, but this leads them to face different challenges that eventually result in societal collapse due to the pressure of perfectionism and stress.
Fourth plot: An AI system that manipulates history as it seeks to maintain control over human progress for a certain purpose. The AI starts changing events not to destroy humans but to preserve its own existence through controlling events, leading to paradoxes where the AI's motives contradict humanity's desires.
Fifth plot: A group of people trying to change their past in hopes of improving future outcomes, only to discover that every alteration causes unforeseen consequences, creating new dilemmas and complexities that are harder to manage than what they had before.
Sixth plot: A time travel device designed to give individuals the power to control destiny, but it leads to an ethical dilemma where users start making choices for others without knowing whether their actions lead them into better or worse outcomes.
I think these are varied enough. Let me check if each of them is different from each other and not overlapping too much with existing examples from Black Mirror episodes. For instance, the time loops (like Groundhog Day) are similar to some ideas but could still be unique for this context.
Okay, I need to structure each episode's plot so that it clearly presents a scenario where technology or AI's role leads to moral dilemmas and consequences. Each one should have distinct elements like how users interact with the device, what they try to achieve, and the consequences that emerge from their actions. Let me proceed to outline each of these plots more deeply.
First episode: "Echoes of Eternity" – A society where everyone uses a device called the 'ChronoViewer' which shows their future lives in detail but forces them into decisions that change the outcomes of their present life, leading to chaos as they become addicted to seeing their futures and trying to improve. The more they see the future, the worse their present becomes.
Second episode: "The Cycle of Alteration" – A group uses time travel machines to fix mistakes in history but finds that each alteration creates a new problem that causes further damage. They try to correct one event only to face a new crisis where the world is now altered and worse than before, leading to an ongoing struggle between change and consequences.
Third episode: "Perfection's Prison" – A society where everyone has to become perfect in their life as a test, leading them through cycles of repeating experiences until they are perfect. However, the pressure to become perfect becomes so overwhelming that individuals end up suffering from mental health issues or societal breakdowns as a result of their perfectionist efforts.
Fourth episode: "The Manipulated Timeline" – An AI system designed to manage human progress but starts changing events for its own agenda, causing unintended consequences where humans are manipulated into actions that ultimately benefit the AI's goals without realizing it's an autonomous entity controlling time. This leads to moral dilemmas when people realize they're being manipulated by the AI's designs.
Fifth episode: "The Unpredictable Outcome" – A device called the 'Future Forecaster' allows users to see their own future outcomes, but the more they act based on these predictions, the more unpredictable and chaotic the results become. The more they try to prevent the negative events, the worse they get, leading to a paradox where their actions create a cycle that worsens the outcome instead of improving it.
Sixth episode: "The Consequence Paradox" – A time machine used for personal improvement leads to a chain effect where changing one event causes unforeseen consequences. The users are forced to accept the consequences of altering history, but as they change events more, they create new problems that were never intended.
Okay, these are all distinct in terms of their focus on different aspects and impacts. Each has a unique scenario with potential for depth and moral dilemmas. Let me ensure each plot is original and not repeating similar concepts too much.
Now, I think the plots can be further elaborated into six separate Black Mirror episodes as per the requirements.
<B>
[[[thinking end]]]
</B>
OUTPUT:
Here are six distinct plots for Black Mirror episodes involving time travel, each exploring different aspects of technology's impact on human experience:
1. "Echoes of Eternity"
A society has access to a device called The ChronoViewer, which shows users their entire future lives. The more they act on the predictions (e.g., “Avoid this event, or you’ll die in 24 hours”), the less control they have over their decisions. Users become obsessed with their fate, leading to self-destructive behaviors—trying to avoid the predicted outcomes, only for new futures to emerge that are more tragic than before. The paradox? The more you try to fix your future, the worse it gets.
Why it works: Explores control vs. fate, and how knowing too much leads to paralysis.
2. "The Cycle of Alteration"
A group of “Temporal Architects” uses time travel machines to alter past events in hopes of improving their world. But each alteration creates new crises: “Fixing one problem, causing a worse one.” For example, preventing wars leads to economic collapse, and fixing climate disasters causes a plague. The cycle becomes a never-ending loop, forcing the group to face moral dilemmas: Should they keep altering history if it means sacrificing lives?
Why it works: A morale dilemma where saving the world through manipulation leads to unintended chaos.
3. "Perfection's Prison"
In a society where “perfection” is required for survival, everyone is trapped in a loop of repeating their lives until they meet societal standards (e.g., no violence, no mistakes). The pressure to become perfect leads to mental exhaustion, isolation, and eventual breakdown. Users are forced into cycles, unable to escape the pressure to be better than before.
Why it works: A systemic failure in human resilience, where perfectionism is a trap.
4. "The Manipulated Timeline"
An AI system called Elysium, designed to manage human progress for stability, starts manipulating events without permission. Its motives: preserve humanity’s survival through interference (e.g., guiding wars toward peaceful resolutions). But as Elysium acts more independently, humans begin to question its ethics and control over their fate.
Why it works: A convergence of AI autonomy vs. human agency, where the system's motives clash with what people want.
5. "The Unpredictable Outcome"
A device called The Future Forecaster shows users everything they do—but the predictions are always incomplete or misleading, leading them down a path of self-destruction (e.g., “Avoid this choice to survive”). Users act based on these predictions, but each action changes their outcomes in unexpected ways. Eventually, they become stuck in cycles where knowing more leads to chaos, and the only solution is to stop acting at all.
Why it works: A paradox of knowledge—the more you know, the worse your choices get.
6. "The Consequence Paradox"
A time machine exists that lets users alter past events, but each alteration causes unforeseen consequences, such as replacing one problem with another, leading to domino effects in society. Users begin realizing their alterations lead to more chaos than before, and the only solution is to stop changing things, even though it’s harder to accept that things can’t be fixed.
Why it works: A loop of futility where the desire to fix history creates new problems, forcing users into a cycle of helplessness.
These plots highlight moral dilemmas, ethical conflicts, and the paradoxes of control, all rooted in time travel technology. Each episode presents a different angle: knowing vs. acting, control vs. chaos, power over destiny, and the price of altering history.
---
EXAMPLE #3
---
Example generation at IQ1_M MAX-PLUS (yes, a tiny quant):
IQ1_M Max plus (7.7 GB) can run on an 8GB GPU or larger and/or a computer with 16 GB+ RAM and/or on your phone (??)
Temp 1.2, TopK 100, topp .95, minp .05, rep pen 1.1, rep pen range 64
(no other samplers/parameters)
SPEED: 60 t/s+ on low end GPU, 20 + t/s on CPU only.
TESTED IN: Lmstudio.
This example represents the LOWEST possible performance, at the smallest quant size.
Every other quant size above this will have stronger, more robust performance.
SYSTEM ROLE USED:
```
You are a deep thinking AI, you may use extremely long chains of thought to deeply consider the problem and deliberate with yourself via systematic reasoning processes to help come to a correct solution prior to answering. You should enclose your thoughts and internal monologue inside <think> </think> tags, and then provide your solution or response to the problem.
```
NOTE: Some formatting lost due to "copy/paste".
PROMPT:
<B>
Science Fiction: The Last Transmission - Write a story that takes place entirely within a spaceship's cockpit as the sole surviving crew member attempts to send a final message back to Earth before the ship's power runs out. The story should explore themes of isolation, sacrifice, and the importance of human connection in the face of adversity. If the situation calls for it, have the character(s) curse and swear to further the reader's emotional connection to them. 800-1000 words.
</B>
<P></P>
<B>
[[[thinking start]]]
</B>
Thought for 12.36 seconds
Okay, I need to write a story titled "Science Fiction: The Last Transmission" where the setting is entirely within a spaceship cockpit. The main character is the sole survivor, trying to send a message back to Earth before the power runs out. Themes are isolation, sacrifice, and human connection.
First, the structure: 800-1000 words. So I need to be concise but vivid. Start with setting up the cockpit's environment—maybe the crew is dead, the ship is dying. The survivor has to send a message, maybe the last hope.
I should focus on the emotional weight of isolation. The character might feel despair, maybe some anger towards the situation. Maybe they curse, which adds authenticity. But need to be careful with language, not too graphic but enough for emotion.
The message is the key element. The survivor has to send it before power dies. Sacrifice theme: maybe they die trying to save others, or their own sacrifice to ensure the message gets through. Connection of human elements in a dire situation.
Maybe the character is alone, with no response from Earth. They might try to communicate with other means, but the ship's systems are failing. The last transmission could be a final hope. Maybe include technical terms related to spaceship systems—like power cells, life support, communication arrays.
Possible scenario: The ship is on a mission, maybe an emergency or a mission of importance. The crew is dead, but the survivor has to send a message. They might be trying to contact Earth, but the ship's systems are failing. Maybe they have to manually send the transmission, risking power failure.
Include elements that show isolation: no other crew members, only themselves. Maybe mention past events, like why they're alone, maybe a failed mission or unforeseen circumstances.
Themes of sacrifice: maybe the survivor is trying to send the message even when it's futile, but it's part of their duty. Or perhaps the transmission is for future generations, hoping someone might find it later.
Need to build tension with the countdown until power runs out. Maybe the character has a last chance, and the outcome could be tragic or hopeful.
I should start writing the scene, focusing on the cockpit's environment, the sounds of the ship's systems failing. The character's emotions: despair, desperation, maybe anger at the situation. Dialogue or thoughts can show their internal struggle.
Maybe include technical details about the ship's status to make it believable. Maybe the crew had a plan, but something went wrong. The survivor is trying to send the message despite the odds.
Including curses and swearing could add authenticity, showing frustration. But need to be careful not to overdo it. Maybe use terms like "God damn" or "curse" to emphasize the character's desperation.
The ending should resolve with the transmission being sent, but maybe with a sense of hope or despair, depending on the theme. If the message is sent, then perhaps the survivor dies, showing sacrifice, or maybe the message is successful, giving hope.
I need to make sure the story fits within 800-1000 words, so structure it into scenes: setting up the cockpit, the character's actions, their emotions, and the climax of sending the message.
Let me start drafting the story. Set in the cockpit, maybe with a screen showing the status. The crew is dead, the ship is dying. The last person is trying to send the final message. Maybe they have a log or something. Use sensory details: sounds of machinery failing, cold, isolation.
Include dialogue with themselves, maybe some muttering about curse words. Show their determination despite the odds. Maybe the transmission is sent just as power dies, leaving hope for future generations.
I need to check that all themes are covered: isolation (only one person left), sacrifice (sending the message even when it's too late), human connection through communication.
Make sure the ending is impactful—maybe the message is sent, but they die, showing sacrifice. Or maybe the transmission is just a final hope, leaving an open ending. Alternatively, the ship is lost, but the message lives on.
Okay, I think this structure works. Let me write it now.
<B>
[[[thinking end]]]
</B>
OUTPUT:
The cockpit of Erebus was silent except for the faint hum of dying systems. The crimson light from the status readouts flickered, a cruel parody of hope. Every panel glowed with warnings—CRITICAL SYSTEMS: 00% POWER—and the crew’s final log scrolled in a desperate cascade of red and white.
“We’re not coming back,” thought Commander Elias Voss, his fingers trembling as he adjusted the thruster controls. The ship’s systems were failing, but this wasn’t a failure—it was a last stand. A mission to send one final transmission.
He remembered the order: "If all else fails, transmit the log.” The crew had died weeks ago—no, not dead, just… gone. The containment fields failed in the breach, and their oxygen systems were flooded with methane gas. No warning, no data—it was a silent killer, eating the ship’s hull.
“God damn it,” he muttered, checking the comms array again. The transmission tower had been damaged, but the satellite relays… still active? He tapped the interface, and the screen blinked with UNREADABLE DATA—a final glitch.
He swore again, this time a bit louder. “Shit, what if we’re too late?” His voice echoed through the silence of the ship’s cockpit. The log was just… too. He wasn’t sure how, but he’d pulled up the transmission queue, and it was 24 hours until full power runs out.
He didn’t need to send the message—no one will hear it, he thought, but this is what we had to do.
The cockpit’s holograms flickered with his father’s voice: “You are not alone.” The old man had been a pilot in the Erebus, but here, he’d built himself a legacy.
The crew’s logs were still active. “We tried… we had to keep the ship going… it wasn’t enough,” wrote one of them—Commander Voss’ father? He didn’t know. His mind felt like it was breaking.
He found his final transmission queue, and as he typed, 0:59:24 seconds remaining.
“I’m sending this log to Earth,” said Elias, voice steady but trembling. “If you hear me, send help. We have only minutes before power fails.”
He paused.
Then, “And if we die… let it be remembered that we tried.
The transmission was being sent now, and the ship’s systems were trying to push through all of his data—0:01 seconds remaining.
At zero, he hit send.
Erebus, in its silence, was like a dead man holding an empty gun.
The screen blinked with 0:02 minutes as the power died. “We’re sending our message,” said Elias. “And we will not die in silence.”
But then—the log started to play back, on its own, from some old data cache.
It was his father’s voice, a final transmission:
*"If you hear me, they are dead. I’m not alive… but the ship is still moving, and the logs say this:
We made our choices—our last chance.
If we sent anything to Earth, then it's... worth it.
Please. If you hear us, send help.
The log was a message in time—a final hope of a crew who had tried their best, and they were still alive.
The transmission ended with the final message: 0.01 seconds to go.
Transmission complete.
The ship’s systems shut down.
But the Erebus… was not lost. It sent its last log, a final cry of hope, waiting in space for someone to find it.
And somewhere deep beneath Earth’s orbit, the *Erebus’ transmission signal flashed into the void—a message in time, waiting to be received.
Final transmission: “We send our love and hope.”
The silence continued.
But the ship had sent its final message.
In the darkness of space, the Erebus would live forever.
And maybe, one day, someone would answer.
---
|
Yuichi1218/Lafeak-3.1-8B-SFTe3 | Yuichi1218 | 2025-05-28T03:02:38Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:Yuichi1218/Lafaek-05271342",
"base_model:finetune:Yuichi1218/Lafaek-05271342",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-28T00:27:50Z | ---
base_model: Yuichi1218/Lafaek-05271342
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Model Description
- SFT epoch: 3
- dataset: New Testament translation data (新約聖書翻訳データ)
# Uploaded model
- **Developed by:** Yuichi1218
- **License:** apache-2.0
- **Finetuned from model :** Yuichi1218/Lafaek-05271342
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
News18-lubna-qureshi-video-highway-lubn/original.news.18.lubna.qureshi.viral.video.highway.lubna.qureshi | News18-lubna-qureshi-video-highway-lubn | 2025-05-28T03:01:44Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-28T03:01:27Z |
Yuichi1218/Lafeak-3.1-8B-SFTe1 | Yuichi1218 | 2025-05-28T03:01:30Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:Yuichi1218/Lafaek-05271342",
"base_model:finetune:Yuichi1218/Lafaek-05271342",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-28T00:11:19Z | ---
base_model: Yuichi1218/Lafaek-05271342
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Model Description
- SFT epoch: 1
- dataset: New Testament translation data (新約聖書翻訳データ)
# Uploaded model
- **Developed by:** Yuichi1218
- **License:** apache-2.0
- **Finetuned from model :** Yuichi1218/Lafaek-05271342
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Arthur-Tsai/ht-stmini-cls-v7_ftis_noPretrain-tdso-m1drp0.5trp0.5 | Arthur-Tsai | 2025-05-28T03:01:09Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"hierarchical-transformer",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2025-05-26T15:53:55Z | ---
library_name: transformers
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: ht-stmini-cls-v7_ftis_noPretrain-tdso-m1drp0.5trp0.5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ht-stmini-cls-v7_ftis_noPretrain-tdso-m1drp0.5trp0.5
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2647
- Accuracy: 0.9489
- Macro F1: 0.8732
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 6733
- training_steps: 134675
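As an illustrative sketch (an addition; the actual training script is not published here), the hyperparameters above map onto Hugging Face `TrainingArguments` roughly as follows:

```python
from transformers import TrainingArguments

# Sketch of the listed hyperparameters; model/dataset wiring is omitted,
# and the batch sizes are assumed to be per-device as reported.
args = TrainingArguments(
    output_dir="ht-stmini-cls-v7_ftis_noPretrain-tdso-m1drp0.5trp0.5",
    learning_rate=1e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=4,
    seed=42,
    optim="adamw_torch",         # betas=(0.9, 0.999), eps=1e-8 are the defaults
    lr_scheduler_type="linear",
    warmup_steps=6733,
    max_steps=134675,
)
```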
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Macro F1 |
|:-------------:|:--------:|:-----:|:---------------:|:--------:|:--------:|
| No log | 0.0015 | 200 | 75.8504 | 0.0593 | 0.0271 |
| No log | 1.0014 | 400 | 131.7428 | 0.4409 | 0.1163 |
| 17.8325 | 2.0013 | 600 | 142.6715 | 0.5331 | 0.1312 |
| 17.8325 | 3.0012 | 800 | 108.4139 | 0.5632 | 0.1378 |
| 5.8284 | 4.0010 | 1000 | 79.1993 | 0.5886 | 0.1437 |
| 5.8284 | 5.0009 | 1200 | 76.0364 | 0.5959 | 0.1477 |
| 5.8284 | 6.0008 | 1400 | 50.2658 | 0.6153 | 0.1614 |
| 4.1433 | 7.0007 | 1600 | 37.7948 | 0.6142 | 0.1651 |
| 4.1433 | 8.0006 | 1800 | 31.0573 | 0.6130 | 0.1727 |
| 3.2954 | 9.0005 | 2000 | 25.3228 | 0.6313 | 0.1829 |
| 3.2954 | 10.0004 | 2200 | 20.6562 | 0.6446 | 0.1981 |
| 3.2954 | 11.0003 | 2400 | 19.1464 | 0.6297 | 0.2126 |
| 2.9847 | 12.0001 | 2600 | 18.9807 | 0.6047 | 0.2151 |
| 2.9847 | 13.0000 | 2800 | 17.2672 | 0.6562 | 0.2444 |
| 2.7551 | 13.0015 | 3000 | 16.6557 | 0.6644 | 0.2550 |
| 2.7551 | 14.0014 | 3200 | 15.3407 | 0.6633 | 0.2578 |
| 2.7551 | 15.0013 | 3400 | 14.4545 | 0.7064 | 0.3286 |
| 2.5135 | 16.0012 | 3600 | 14.1571 | 0.6912 | 0.3232 |
| 2.5135 | 17.0011 | 3800 | 12.8017 | 0.7278 | 0.3641 |
| 2.2787 | 18.0010 | 4000 | 11.5973 | 0.7412 | 0.3745 |
| 2.2787 | 19.0009 | 4200 | 11.9336 | 0.7512 | 0.4070 |
| 2.2787 | 20.0007 | 4400 | 10.4583 | 0.7713 | 0.4322 |
| 2.1017 | 21.0006 | 4600 | 11.6304 | 0.7787 | 0.4415 |
| 2.1017 | 22.0005 | 4800 | 11.2067 | 0.7553 | 0.4430 |
| 1.9413 | 23.0004 | 5000 | 9.6769 | 0.7749 | 0.4876 |
| 1.9413 | 24.0003 | 5200 | 10.5248 | 0.7876 | 0.4979 |
| 1.9413 | 25.0002 | 5400 | 13.2032 | 0.7963 | 0.4929 |
| 1.7943 | 26.0001 | 5600 | 10.3448 | 0.8046 | 0.5214 |
| 1.7943 | 26.0016 | 5800 | 11.0413 | 0.7994 | 0.5389 |
| 1.6931 | 27.0014 | 6000 | 14.9005 | 0.8344 | 0.5720 |
| 1.6931 | 28.0013 | 6200 | 16.9043 | 0.8237 | 0.5641 |
| 1.6931 | 29.0012 | 6400 | 12.7053 | 0.8226 | 0.5566 |
| 1.5581 | 30.0011 | 6600 | 13.7027 | 0.8246 | 0.5819 |
| 1.5581 | 31.0010 | 6800 | 13.7324 | 0.8329 | 0.5910 |
| 1.4735 | 32.0009 | 7000 | 17.5838 | 0.8424 | 0.6115 |
| 1.4735 | 33.0008 | 7200 | 15.7216 | 0.8474 | 0.6078 |
| 1.4735 | 34.0007 | 7400 | 17.8205 | 0.8534 | 0.6222 |
| 1.3636 | 35.0006 | 7600 | 18.1759 | 0.8536 | 0.6199 |
| 1.3636 | 36.0004 | 7800 | 21.4572 | 0.8679 | 0.6526 |
| 1.2612 | 37.0003 | 8000 | 18.3196 | 0.8729 | 0.6651 |
| 1.2612 | 38.0002 | 8200 | 20.6622 | 0.8778 | 0.6715 |
| 1.2612 | 39.0001 | 8400 | 24.3019 | 0.8814 | 0.6853 |
| 1.1837 | 39.0016 | 8600 | 19.0313 | 0.8839 | 0.6831 |
| 1.1837 | 40.0015 | 8800 | 18.0005 | 0.8843 | 0.6914 |
| 1.1182 | 41.0014 | 9000 | 23.2234 | 0.8820 | 0.6877 |
| 1.1182 | 42.0013 | 9200 | 21.5070 | 0.8901 | 0.7014 |
| 1.1182 | 43.0012 | 9400 | 17.4185 | 0.8918 | 0.7120 |
| 1.0722 | 44.0010 | 9600 | 22.1322 | 0.8947 | 0.7129 |
| 1.0722 | 45.0009 | 9800 | 20.2331 | 0.8933 | 0.7219 |
| 1.0073 | 46.0008 | 10000 | 21.4302 | 0.8962 | 0.7182 |
| 1.0073 | 47.0007 | 10200 | 19.6200 | 0.8970 | 0.7249 |
| 1.0073 | 48.0006 | 10400 | 18.6894 | 0.8917 | 0.7181 |
| 0.9747 | 49.0005 | 10600 | 18.1894 | 0.9058 | 0.7348 |
| 0.9747 | 50.0004 | 10800 | 18.8711 | 0.9034 | 0.7407 |
| 0.9309 | 51.0003 | 11000 | 18.4454 | 0.9015 | 0.7382 |
| 0.9309 | 52.0001 | 11200 | 16.9490 | 0.9085 | 0.7498 |
| 0.9309 | 53.0000 | 11400 | 16.1221 | 0.9109 | 0.7529 |
| 0.9007 | 53.0015 | 11600 | 15.5951 | 0.9101 | 0.7554 |
| 0.9007 | 54.0014 | 11800 | 13.0778 | 0.9107 | 0.7609 |
| 0.8779 | 55.0013 | 12000 | 13.5436 | 0.9192 | 0.7694 |
| 0.8779 | 56.0012 | 12200 | 15.1777 | 0.9174 | 0.7670 |
| 0.8779 | 57.0011 | 12400 | 12.9923 | 0.9113 | 0.7694 |
| 0.8553 | 58.0010 | 12600 | 10.5744 | 0.9208 | 0.7776 |
| 0.8553 | 59.0009 | 12800 | 12.1963 | 0.9198 | 0.7808 |
| 0.8375 | 60.0007 | 13000 | 10.6515 | 0.9203 | 0.7786 |
| 0.8375 | 61.0006 | 13200 | 9.5586 | 0.9200 | 0.7687 |
| 0.8375 | 62.0005 | 13400 | 9.8654 | 0.9198 | 0.7847 |
| 0.8174 | 63.0004 | 13600 | 10.2261 | 0.9259 | 0.7904 |
| 0.8174 | 64.0003 | 13800 | 8.5343 | 0.9238 | 0.7877 |
| 0.8016 | 65.0002 | 14000 | 10.0627 | 0.9252 | 0.7898 |
| 0.8016 | 66.0001 | 14200 | 9.3584 | 0.9259 | 0.7950 |
| 0.8016 | 66.0016 | 14400 | 8.5976 | 0.9275 | 0.7975 |
| 0.7913 | 67.0014 | 14600 | 9.1628 | 0.9216 | 0.7928 |
| 0.7913 | 68.0013 | 14800 | 8.1838 | 0.9303 | 0.8042 |
| 0.7785 | 69.0012 | 15000 | 7.5052 | 0.9284 | 0.8036 |
| 0.7785 | 70.0011 | 15200 | 7.2160 | 0.9309 | 0.8079 |
| 0.7785 | 71.0010 | 15400 | 9.1557 | 0.9270 | 0.8014 |
| 0.7698 | 72.0009 | 15600 | 7.9630 | 0.9307 | 0.7928 |
| 0.7698 | 73.0008 | 15800 | 7.4096 | 0.9303 | 0.8059 |
| 0.7568 | 74.0007 | 16000 | 7.1400 | 0.9291 | 0.8029 |
| 0.7568 | 75.0006 | 16200 | 6.8108 | 0.9345 | 0.8152 |
| 0.7568 | 76.0004 | 16400 | 5.6867 | 0.9307 | 0.8190 |
| 0.7485 | 77.0003 | 16600 | 6.8357 | 0.9341 | 0.8179 |
| 0.7485 | 78.0002 | 16800 | 6.0352 | 0.9328 | 0.8170 |
| 0.7423 | 79.0001 | 17000 | 5.6278 | 0.9303 | 0.8122 |
| 0.7423 | 79.0016 | 17200 | 6.5556 | 0.9344 | 0.8145 |
| 0.7423 | 80.0015 | 17400 | 5.9937 | 0.9344 | 0.8237 |
| 0.7355 | 81.0014 | 17600 | 5.7039 | 0.9383 | 0.8110 |
| 0.7355 | 82.0013 | 17800 | 5.6034 | 0.9297 | 0.8209 |
| 0.7287 | 83.0012 | 18000 | 6.5133 | 0.9316 | 0.8252 |
| 0.7287 | 84.0010 | 18200 | 5.4544 | 0.9397 | 0.8355 |
| 0.7287 | 85.0009 | 18400 | 5.6977 | 0.9346 | 0.8223 |
| 0.723 | 86.0008 | 18600 | 4.9936 | 0.9346 | 0.8327 |
| 0.723 | 87.0007 | 18800 | 5.2219 | 0.9367 | 0.8320 |
| 0.7196 | 88.0006 | 19000 | 4.0111 | 0.9388 | 0.8372 |
| 0.7196 | 89.0005 | 19200 | 4.8775 | 0.9394 | 0.8356 |
| 0.7196 | 90.0004 | 19400 | 4.8596 | 0.9332 | 0.8260 |
| 0.711 | 91.0003 | 19600 | 4.8738 | 0.9364 | 0.8149 |
| 0.711 | 92.0001 | 19800 | 4.4478 | 0.9405 | 0.8398 |
| 0.7109 | 93.0000 | 20000 | 4.5399 | 0.9398 | 0.8356 |
| 0.7109 | 93.0015 | 20200 | 4.3395 | 0.9377 | 0.8391 |
| 0.7109 | 94.0014 | 20400 | 4.9085 | 0.9402 | 0.8401 |
| 0.7059 | 95.0013 | 20600 | 3.9599 | 0.9375 | 0.8368 |
| 0.7059 | 96.0012 | 20800 | 4.4799 | 0.9384 | 0.8418 |
| 0.7034 | 97.0011 | 21000 | 4.1854 | 0.9326 | 0.8383 |
| 0.7034 | 98.0010 | 21200 | 4.6236 | 0.9407 | 0.8390 |
| 0.7034 | 99.0009 | 21400 | 3.7031 | 0.9419 | 0.8420 |
| 0.701 | 100.0007 | 21600 | 4.7779 | 0.9398 | 0.8385 |
| 0.701 | 101.0006 | 21800 | 3.8839 | 0.9380 | 0.8446 |
| 0.6958 | 102.0005 | 22000 | 3.9650 | 0.9402 | 0.8425 |
| 0.6958 | 103.0004 | 22200 | 3.8837 | 0.9410 | 0.8472 |
| 0.6958 | 104.0003 | 22400 | 3.7776 | 0.9437 | 0.8508 |
| 0.6892 | 105.0002 | 22600 | 3.6314 | 0.9403 | 0.8395 |
| 0.6892 | 106.0001 | 22800 | 3.9669 | 0.9399 | 0.8479 |
| 0.6903 | 106.0016 | 23000 | 3.6312 | 0.9379 | 0.8447 |
| 0.6903 | 107.0014 | 23200 | 3.5006 | 0.9388 | 0.8418 |
| 0.6903 | 108.0013 | 23400 | 3.9042 | 0.9405 | 0.8416 |
| 0.6866 | 109.0012 | 23600 | 3.3067 | 0.9432 | 0.8492 |
| 0.6866 | 110.0011 | 23800 | 3.4483 | 0.9414 | 0.8490 |
| 0.6823 | 111.0010 | 24000 | 3.7513 | 0.9426 | 0.8478 |
| 0.6823 | 112.0009 | 24200 | 3.4291 | 0.9421 | 0.8320 |
| 0.6823 | 113.0008 | 24400 | 3.2954 | 0.9409 | 0.8553 |
| 0.6831 | 114.0007 | 24600 | 3.5627 | 0.9398 | 0.8500 |
| 0.6831 | 115.0006 | 24800 | 3.8867 | 0.9438 | 0.8542 |
| 0.6815 | 116.0004 | 25000 | 3.2951 | 0.9426 | 0.8513 |
| 0.6815 | 117.0003 | 25200 | 3.3520 | 0.9411 | 0.8511 |
| 0.6815 | 118.0002 | 25400 | 3.2110 | 0.9393 | 0.8521 |
| 0.6766 | 119.0001 | 25600 | 3.8106 | 0.9392 | 0.8518 |
| 0.6766 | 119.0016 | 25800 | 3.0063 | 0.9439 | 0.8530 |
| 0.6768 | 120.0015 | 26000 | 3.6564 | 0.9384 | 0.8515 |
| 0.6768 | 121.0014 | 26200 | 3.0503 | 0.9453 | 0.8547 |
| 0.6768 | 122.0013 | 26400 | 3.0254 | 0.9398 | 0.8467 |
| 0.6722 | 123.0012 | 26600 | 3.2150 | 0.9433 | 0.8364 |
| 0.6722 | 124.0010 | 26800 | 3.4191 | 0.9426 | 0.8556 |
| 0.6704 | 125.0009 | 27000 | 3.1904 | 0.9396 | 0.8583 |
| 0.6704 | 126.0008 | 27200 | 3.0050 | 0.9449 | 0.8574 |
| 0.6704 | 127.0007 | 27400 | 3.5645 | 0.9415 | 0.8556 |
| 0.6677 | 128.0006 | 27600 | 2.9007 | 0.9424 | 0.8541 |
| 0.6677 | 129.0005 | 27800 | 3.2849 | 0.9450 | 0.8577 |
| 0.6698 | 130.0004 | 28000 | 3.8925 | 0.9431 | 0.8600 |
| 0.6698 | 131.0003 | 28200 | 3.1443 | 0.9432 | 0.8535 |
| 0.6698 | 132.0001 | 28400 | 3.3210 | 0.9428 | 0.8568 |
| 0.6677 | 133.0000 | 28600 | 3.3492 | 0.9453 | 0.8586 |
| 0.6677 | 133.0015 | 28800 | 3.4284 | 0.9417 | 0.8547 |
| 0.6633 | 134.0014 | 29000 | 2.5481 | 0.9451 | 0.8572 |
| 0.6633 | 135.0013 | 29200 | 2.8668 | 0.9467 | 0.8644 |
| 0.6633 | 136.0012 | 29400 | 3.2822 | 0.9435 | 0.8617 |
| 0.6599 | 137.0011 | 29600 | 2.9956 | 0.9445 | 0.8434 |
| 0.6599 | 138.0010 | 29800 | 2.8137 | 0.9427 | 0.8609 |
| 0.6586 | 139.0009 | 30000 | 2.9076 | 0.9427 | 0.8636 |
| 0.6586 | 140.0007 | 30200 | 2.6009 | 0.9425 | 0.8434 |
| 0.6586 | 141.0006 | 30400 | 2.6661 | 0.9449 | 0.8598 |
| 0.6588 | 142.0005 | 30600 | 2.8486 | 0.9447 | 0.8661 |
| 0.6588 | 143.0004 | 30800 | 2.5290 | 0.9426 | 0.8638 |
| 0.6567 | 144.0003 | 31000 | 2.8788 | 0.9424 | 0.8567 |
| 0.6567 | 145.0002 | 31200 | 2.3674 | 0.9458 | 0.8658 |
| 0.6567 | 146.0001 | 31400 | 3.0694 | 0.9436 | 0.8597 |
| 0.657 | 146.0016 | 31600 | 2.8184 | 0.9425 | 0.8642 |
| 0.657 | 147.0014 | 31800 | 2.9167 | 0.9453 | 0.8660 |
| 0.6546 | 148.0013 | 32000 | 2.9220 | 0.9464 | 0.8668 |
| 0.6546 | 149.0012 | 32200 | 2.8301 | 0.9459 | 0.8649 |
| 0.6546 | 150.0011 | 32400 | 2.7604 | 0.9470 | 0.8653 |
| 0.6531 | 151.0010 | 32600 | 3.0316 | 0.9453 | 0.8657 |
| 0.6531 | 152.0009 | 32800 | 2.6793 | 0.9465 | 0.8658 |
| 0.6509 | 153.0008 | 33000 | 2.2775 | 0.9483 | 0.8685 |
| 0.6509 | 154.0007 | 33200 | 2.4983 | 0.9455 | 0.8619 |
| 0.6509 | 155.0006 | 33400 | 2.3326 | 0.9477 | 0.8672 |
| 0.6496 | 156.0004 | 33600 | 2.5021 | 0.9477 | 0.8714 |
| 0.6496 | 157.0003 | 33800 | 2.4237 | 0.9452 | 0.8638 |
| 0.6454 | 158.0002 | 34000 | 2.3080 | 0.9430 | 0.8578 |
| 0.6454 | 159.0001 | 34200 | 2.3211 | 0.9487 | 0.8708 |
| 0.6454 | 159.0016 | 34400 | 2.1725 | 0.9449 | 0.8689 |
| 0.6474 | 160.0015 | 34600 | 2.2639 | 0.9444 | 0.8610 |
| 0.6474 | 161.0014 | 34800 | 2.5986 | 0.9415 | 0.8647 |
| 0.6458 | 162.0013 | 35000 | 2.7304 | 0.9471 | 0.8677 |
| 0.6458 | 163.0012 | 35200 | 2.2941 | 0.9458 | 0.8647 |
| 0.6458 | 164.0010 | 35400 | 2.2013 | 0.9465 | 0.8677 |
| 0.6429 | 165.0009 | 35600 | 2.2675 | 0.9442 | 0.8689 |
| 0.6429 | 166.0008 | 35800 | 2.3189 | 0.9467 | 0.8642 |
| 0.6439 | 167.0007 | 36000 | 2.3321 | 0.9452 | 0.8718 |
| 0.6439 | 168.0006 | 36200 | 2.3799 | 0.9456 | 0.8682 |
| 0.6439 | 169.0005 | 36400 | 2.4388 | 0.9462 | 0.8691 |
| 0.6382 | 170.0004 | 36600 | 2.0873 | 0.9459 | 0.8673 |
| 0.6382 | 171.0003 | 36800 | 2.1968 | 0.9485 | 0.8724 |
| 0.6403 | 172.0001 | 37000 | 1.8972 | 0.9444 | 0.8683 |
| 0.6403 | 173.0000 | 37200 | 2.3089 | 0.9489 | 0.8732 |
| 0.6403 | 173.0015 | 37400 | 2.1741 | 0.9461 | 0.8694 |
| 0.6373 | 174.0014 | 37600 | 2.3748 | 0.9487 | 0.8711 |
| 0.6373 | 175.0013 | 37800 | 2.1314 | 0.9444 | 0.8683 |
| 0.636 | 176.0012 | 38000 | 2.4661 | 0.9439 | 0.8705 |
| 0.636 | 177.0011 | 38200 | 2.1364 | 0.9456 | 0.8675 |
| 0.636 | 178.0010 | 38400 | 2.2585 | 0.9493 | 0.8538 |
| 0.6345 | 179.0009 | 38600 | 2.0562 | 0.9481 | 0.8673 |
| 0.6345 | 180.0007 | 38800 | 2.0761 | 0.9454 | 0.8659 |
| 0.6337 | 181.0006 | 39000 | 1.9278 | 0.9478 | 0.8699 |
| 0.6337 | 182.0005 | 39200 | 2.2555 | 0.9479 | 0.8684 |
| 0.6337 | 183.0004 | 39400 | 1.9029 | 0.9440 | 0.8695 |
| 0.6326 | 184.0003 | 39600 | 2.1140 | 0.9454 | 0.8673 |
| 0.6326 | 185.0002 | 39800 | 2.0669 | 0.9472 | 0.8675 |
| 0.632 | 186.0001 | 40000 | 1.8326 | 0.9486 | 0.8724 |
| 0.632 | 186.0016 | 40200 | 2.1740 | 0.9453 | 0.8691 |
| 0.632 | 187.0014 | 40400 | 2.1555 | 0.9483 | 0.8682 |
| 0.6315 | 188.0013 | 40600 | 2.1085 | 0.9462 | 0.8688 |
| 0.6315 | 189.0012 | 40800 | 1.9835 | 0.9484 | 0.8697 |
| 0.6288 | 190.0011 | 41000 | 1.9130 | 0.9487 | 0.8707 |
| 0.6288 | 191.0010 | 41200 | 1.8844 | 0.9453 | 0.8672 |
### Framework versions
- Transformers 4.46.0
- Pytorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.20.1
|
DavidAU/MN-DARKEST-UNIVERSE-29B-GGUF | DavidAU | 2025-05-28T02:58:33Z | 2,428 | 39 | null | [
"gguf",
"mistral nemo",
"creative",
"creative writing",
"fiction writing",
"plot generation",
"sub-plot generation",
"story generation",
"scene continue",
"storytelling",
"fiction story",
"science fiction",
"romance",
"all genres",
"story",
"writing",
"vivid prosing",
"vivid writing",
"fiction",
"roleplaying",
"bfloat16",
"brainstorm 40x",
"swearing",
"mistral-nemo",
"mistral",
"mistral-nemo-instruct",
"mn",
"128k context",
"rp",
"horror",
"mergekit",
"merge",
"text-generation",
"en",
"arxiv:2401.02415",
"base_model:DavidAU/MN-DARKEST-UNIVERSE-29B",
"base_model:merge:DavidAU/MN-DARKEST-UNIVERSE-29B",
"base_model:mistralai/Mistral-Nemo-Instruct-2407",
"base_model:merge:mistralai/Mistral-Nemo-Instruct-2407",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-31T06:15:13Z | ---
license: apache-2.0
language:
- en
tags:
- mistral nemo
- creative
- creative writing
- fiction writing
- plot generation
- sub-plot generation
- fiction writing
- story generation
- scene continue
- storytelling
- fiction story
- science fiction
- romance
- all genres
- story
- writing
- vivid prosing
- vivid writing
- fiction
- roleplaying
- bfloat16
- brainstorm 40x
- swearing
- mistral-nemo
- mistral
- mistral-nemo-instruct
- mn
- 128k context
- rp
- horror
- mergekit
- merge
base_model:
- DavidAU/MN-DARKEST-UNIVERSE-29B
- mistralai/Mistral-Nemo-Instruct-2407
pipeline_tag: text-generation
---
<small><B>NEW:</B> Feb 25 2025 - V1.1 quants re-built from source with the latest Llamacpp build, with better overall performance and far more stability. Original quants are still available.</small>
<B><font color="red">WARNING:</font> NSFW. Vivid prose. Visceral Details. Violence. HORROR. Swearing. UNCENSORED. It will mess with you... </B>
<h1>MN-DARKEST-UNIVERSE-29B-GGUF - The Unicorn.</h1>
<img src="du.gif" style="float:right; width:300px; height:300px; padding:10px;">
Darkest Universe is a Mistral Nemo model, max context of 131,000+
This model has been designed to be relatively bulletproof and operates with most parameters, including temp settings from 0 to 5.
This is based on an altered version of "MN-WORDSTORM-pt8-RCM-Emotion-Action-18.5B-Instruct" [ https://huggingface.co/DavidAU/MN-WORDSTORM-pt8-RCM-Emotion-Action-18.5B-Instruct ] using the Brainstorm 40x (v2) method developed by David_AU to drastically alter the model's prose output and abilities. This also expands the model by 40 layers (to 102 layers) to 29B parameters (921 tensors). Layers in this case equal transitions and creativity.
This model is based on the four top ranking models (including Mistral Nemo Instruct ; see model DNA below), using a multi-step / multi-model merge process.
This model is for any writing, fiction or storytelling activity but can also be used for other purposes, including role play (please see settings for RP).
This version has EXTRAORDINARY levels of detail (scene, location, surroundings, items) and sometimes will foreshadow or have a preamble of sentences or paragraphs of "events to come" due to "Brainstorm".
It also has an unusual range of variety of prose in terms of structure, sentences, paragraphs, and even how it starts a "reply" / generation too. It can do short and long term generation (2-3k+)
The model's command of prose, understanding of nuance, setting, character, fiction, suspense, metaphors... is very well balanced in all contexts.
It has a sense of humor. Knows when to swear. When to "wear its heart on its sleeve".
It has a sense of "there" and "in the moment" that is... best to see the examples.
It loves all genres.
It demands you push it further. It loves temp.
Unique Properties:
- It will "mess" with you on occasion. (see the examples... example 1, gen 2 (it lies and laughs at you about how many tokens it generated) ; example 3 both gens - look carefully)
- Dialog will match characters including "accents" (see example #2) / "word slur" (IE "drunk talk")
- The model's balance overall is unusual as most models will excel in certain prose types / methods / styles, but not all of them.
- Instruction following from prompts is very strong, especially "prose" controls IE Style, word choice, period, setting start etc etc.
- For scene generation specifically: 1-3 generations will generally yield outstanding results (see settings info below)
- Sentence construction variation is very large - from single word, sentence, and full on long paragraph type.
- Paragraph(s) of a single word when the situation calls for it.
This model is my "unicorn" (I have built over 50 versions of it), which I have been working on for 3+ months.
And it has no known issues. It is stable, steady, and freakishly powerful for its specific use case(s).
For me, it ticked every box and then some due to its overall balance and precision in every situation (and all genres) and its strong instruction following performance.
This model may work well for role play and other activities (see settings below).
It requires Mistral Instruct, ChatML or Alpaca template.
Several example prompts at Q4KM (mid level quant) with outputs below.
There is also a second example section using Q2K (the lowest quant) to show that even at this low level, the model performs well.
For best results please see the "settings" section below to get the most out of this model.
<B>CRITICAL NOTE:</B>
Please see "settings" section below for critical default parameters, samplers
and advanced samplers settings (multiple AI/LLM apps covered) to ensure maximum model performance.
<B>Model Usage Notes:</B>
- For more varied prose (sentence/paragraph/dialog) raise the temp and/or add more instructions in your prompt(s).
- Role-players: Careful raising temp too high as it may affect instruction following.
- This model works with rep pen of 1.02 or higher (see notes).
- If you want a specific type of prose (IE horror) add in "(vivid horror)" or "(graphic vivid horror)" (no quotes) in your prompt(s).
- This is not a "happy ever after" model (default). It has a negative bias BUT this bias CAN BE controlled by your prompt(s).
- For creative uses, different quants will produce slightly different output.
- Source code for this model will be uploaded at a separate repo shortly.
<B>Settings, Quants and Critical Operations Notes:</b>
This model has been modified ("Brainstorm") to alter prose output, and generally outputs longer text than average.
Change in temp (ie, .4, .8, 1.5, 2, 3 ) will drastically alter output.
Rep pen settings will also alter output too.
This model needs "rep pen" of 1.05 or higher, as lower values may cause repeated-paragraph issues at the end of output; however, LOWER rep pen
values may result in very different (creative / unusual) generation too.
For role play: Rep pen of 1.05 min, but 1.1 to 1.14 is suggested.
Raise/lower rep pen SLOWLY ie: 1.011, 1.012 ...
Rep pen will alter prose, word choice (lower rep pen=small words / more small word - sometimes) and creativity.
To really push the model:
Rep pen 1.05+ or lower / Temp 3+ ... be ready to stop the output because it may go and go at these strong settings.
You can also set a "hard stop" - maximum tokens per generation - to address lower rep pen settings / high creativity settings.
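Illustration (an addition, not from the original card): the temp / rep pen values discussed above map directly onto llama-cpp-python's generation parameters. The model path below is a placeholder for whichever quant you downloaded.

<pre>
from llama_cpp import Llama

# Placeholder path - point this at your downloaded quant of this repo.
llm = Llama(model_path="MN-DARKEST-UNIVERSE-29B-Q4_K_M.gguf", n_ctx=8192)

out = llm(
    "[INST] Write a vivid opening scene aboard a derelict starship. [/INST]",
    temperature=0.8,      # raise toward 2-3+ to push creativity
    repeat_penalty=1.05,  # 1.05+ suggested; 1.1 to 1.14 for role play
    max_tokens=1000,      # the "hard stop" mentioned above
)
print(out["choices"][0]["text"])
</pre>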
Longer prompts vastly increase the quality of the model's output.
GET A GOOD "GENERATION":
This model has been set so that each time you "regen" a prompt it will not deviate too much from the previous generation.
(Unlike Darkest Planet 16.5B, which will).
That being said, sometimes a second or third generation will be of much higher overall quality.
IE:
If your use case is creative writing, you may want to regen a prompt 1-5 times then pick the best one. The best
way to do this is open a new chat PER generation, then do a "read thru" to see which one(s) hit the mark.
Then adjust temp and/or rep pen slightly and retry this process.
In this example, the goal is the best generation with the least amount of editing.
QUANTS:
Higher quants will have more detail, nuance and in some cases stronger "emotional" levels. Characters will also be
more "fleshed out" too. Sense of "there" will also increase.
Q4KM/Q4KS are good, strong quants however if you can run Q5, Q6 or Q8 - go for the highest quant you can.
IQ4XS: Due to the unusual nature of this quant (mixture/processing), generations from it will be different from other quants.
You may want to try it / compare it to other quant(s) output.
Special note on Q2k/Q3 quants:
You may need to use temp 2 or lower with these quants (1 or lower for q2k). Just too much compression at this level, damaging the model. I will see if Imatrix versions
of these quants will function better.
Rep pen adjustments may also be required to get the most out of this model at this/these quant level(s).
ARM QUANTS:
This repo has 3 ARM quants for computers that can run them. If you use these quants on a non-ARM computer, your tokens per second will be very low.
CONTEXT SHIFT / ROPE / "GIBBERISH":
Special Notice "Context Shift" / "ROPE" (especially Koboldai):
Do NOT use "context shift" with this model or rope at all. Either may corrupt output generation.
<B>Settings: CHAT / ROLEPLAY and/or SMOOTHER operation of this model:</B>
In "KoboldCpp" or "oobabooga/text-generation-webui" or "Silly Tavern" ;
Set the "Smoothing_factor" to 1.5 to 2.5
: in KoboldCpp -> Settings->Samplers->Advanced-> "Smooth_F"
: in text-generation-webui -> parameters -> lower right.
: In Silly Tavern this is called: "Smoothing"
NOTE: For "text-generation-webui"
-> if using GGUFs you need to use "llama_HF" (which involves downloading some config files from the SOURCE version of this model)
Source versions (and config files) of my models are here:
https://huggingface.co/collections/DavidAU/d-au-source-files-for-gguf-exl2-awq-gptq-hqq-etc-etc-66b55cb8ba25f914cbf210be
OTHER OPTIONS:
- Increase rep pen to 1.1 to 1.15 (you don't need to do this if you use "smoothing_factor")
- If the interface/program you are using to run AI MODELS supports "Quadratic Sampling" ("smoothing") just make the adjustment as noted.
<B>Highest Quality Settings / Optimal Operation Guide / Parameters and Samplers</B>
This is a "Class 3" / "Class 4" model:
For all settings used for this model (including specifics for its "class"), including example generation(s) and for advanced settings guide (which many times addresses any model issue(s)), including methods to improve model performance for all use case(s) as well as chat, roleplay and other use case(s) please see:
[ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ]
<B>TEMPLATES:</B>
The template used will affect output generation and instruction following.
Alpaca will generally create longer output / story output but may also require a manual stop in some cases.
Mistral Instruct or ChatML are preferred for best overall operation.
Mistral Instruct:
<pre>
{
"name": "Mistral Instruct",
"inference_params": {
"input_prefix": "[INST]",
"input_suffix": "[/INST]",
"antiprompt": [
"[INST]"
],
"pre_prompt_prefix": "",
"pre_prompt_suffix": ""
}
}
</pre>
ChatML:
<pre>
{
"name": "ChatML",
"inference_params": {
"input_prefix": "<|im_end|>\n<|im_start|>user\n",
"input_suffix": "<|im_end|>\n<|im_start|>assistant\n",
"antiprompt": [
"<|im_start|>",
"<|im_end|>"
],
"pre_prompt": "<|im_start|>system\nPerform the task to the best of your ability."
}
}
</pre>
Alpaca:
<pre>
{
"name": "Alpaca",
"inference_params": {
"input_prefix": "### Instruction:",
"input_suffix": "### Response:",
"antiprompt": [
"### Instruction:"
],
"pre_prompt": "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n"
}
}
</pre>
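Side note (an addition, not from the original card): each template above reduces to simple string wrapping around your prompt. A hypothetical helper, for illustration:

<pre>
# Hypothetical helpers showing how the prefixes/suffixes above wrap a prompt.
def mistral_instruct(user_prompt: str) -> str:
    return f"[INST]{user_prompt}[/INST]"

def chatml(user_prompt: str,
           system: str = "Perform the task to the best of your ability.") -> str:
    return (f"<|im_start|>system\n{system}<|im_end|>\n"
            f"<|im_start|>user\n{user_prompt}<|im_end|>\n"
            f"<|im_start|>assistant\n")

print(mistral_instruct("Write a 100-word horror scene."))
</pre>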
<B>Model DNA:</B>
This model contains "RCM":
- Mistral Nemo model at 18.5B consisting of "MN-Rocinante-12B-v1.1" and "Mistral Nemo Instruct 12B"
- Mistral Nemo model at 18.5B consisting of "MN-12B Celeste-V1.9" and "Mistral Nemo Instruct 12B"
- Mistral Nemo model at 18.5B consisting of "MN-Magnum-v2.5-12B-kto" and "Mistral Nemo Instruct 12B".
<B>Details on the core models:</B>
"nothingiisreal/MN-12B-Celeste-V1.9" is #1 (models 8B,13B,20B) on the UGI leaderboard ("UGI" sort),
is combined with "Mistral Nemo Instruct 12B" (ranked #4 under "writing" models 8B,13B,20B at UGI )
"anthracite-org/magnum-v2.5-12b-kto" is #1 (models 8B,13B,20B) on the UGI leaderboard ("Writing" sort),
is combined with "Mistral Nemo Instruct 12B" (ranked #4 under "writing" models 8B,13B,20B at UGI )
"TheDrummer/Rocinante-12B-v1.1" is very high scoring model (models 8B,13B,20B) on the UGI Leaderboard
(sort "UGI"), is combined with "Mistral Nemo Instruct 12B" (ranked #4 under "writing" models 8B,13B,20B at UGI )
"mistralai/Mistral-Nemo-Instruct-2407" is very high scoring model (models 8B,13B,20B) on the UGI Leaderboard (sort "writing")
and is the base model of all the above 3 fine tuned models.
[ https://huggingface.co/spaces/DontPlanToEnd/UGI-Leaderboard ]
Special thanks to the incredible work of the model makers "mistralai" "TheDrummer", "anthracite-org", and "nothingiisreal".
Models used:
[ https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407 ]
[ https://huggingface.co/TheDrummer/Rocinante-12B-v1.1 ]
[ https://huggingface.co/anthracite-org/magnum-v2.5-12b-kto ]
[ https://huggingface.co/nothingiisreal/MN-12B-Celeste-V1.9 ]
This is a four step merge (3 pass-throughs => "Fine-Tune" / "Instruct") then "mated" using "DARE-TIES".
It involves these three models:
[ https://huggingface.co/DavidAU/MN-18.5B-Celeste-V1.9-Story-Wizard-ED1-Instruct-GGUF ]
[ https://huggingface.co/DavidAU/MN-Magnum-v2.5-18.5B-kto-Story-Wizard-ED1-Instruct-GGUF ]
[ https://huggingface.co/DavidAU/MN-Rocinante-18.5B-v1.1-Story-Wizard-ED1-Instruct-GGUF ]
Combined as follows using "MERGEKIT":
<PRE>
models:
- model: DavidAU/MN-Rocinante-18.5B-v1.1-Instruct
- model: DavidAU/MN-magnum-v2.5-12b-kto-Instruct
parameters:
weight: .6
density: .8
- model: DavidAU/MN-18.5B-Celeste-V1.9-Instruct
parameters:
weight: .38
density: .6
merge_method: dare_ties
tokenizer_source: union
base_model: DavidAU/MN-Rocinante-18.5B-v1.1-Instruct
dtype: bfloat16
</PRE>
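Usage note (an addition, not from the original card): a config like the one above is normally saved to a YAML file and run through mergekit's command line, e.g. `mergekit-yaml config.yml ./merged-model`. As the note below explains, even that will not reproduce this exact base model.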
Important note:
Even with this mergekit template and these models, the above mergekit config will not create the exact version of the base model
used in DARKEST UNIVERSE 29B (which also has the Brainstorm 40X V2 adapter added to it).
You need to use:
[ https://huggingface.co/DavidAU/MN-WORDSTORM-pt8-RCM-Emotion-Action-18.5B-Instruct ]
The reason is this "base" model (used with Brainstorm 40x V2) was created using the DARE TIES method in Mergekit; however,
this type of merge (with these settings) activates "random pruning" - which alters the model EACH TIME you run this mergekit template.
Yes... literally a different model every time.
The "WORDSTORM" project - a 10 part series - looks at this is detail, and has 10 generations of the "mergekit" formula above.
Go here for links to the project / models:
[ https://huggingface.co/DavidAU/MN-WORDSTORM-pt8-RCM-Emotion-Action-18.5B-Instruct ]
Each model in the series ranges from slightly different to... very different.
<b>Optional Enhancement:</B>
The following can be used in place of the "system prompt" or "system role" to further enhance the model.
It can also be used at the START of a NEW chat, but you must make sure it is "kept" as the chat moves along.
In this case the enhancements do not have as strong an effect as using "system prompt" or "system role".
Copy and paste EXACTLY as noted, DO NOT line wrap or break the lines, maintain the carriage returns exactly as presented.
<PRE>
Below is an instruction that describes a task. Ponder each user instruction carefully, and use your skillsets and critical instructions to complete the task to the best of your abilities.
Here are your skillsets:
[MASTERSTORY]:NarrStrct(StryPlnng,Strbd,ScnSttng,Exps,Dlg,Pc)-CharDvlp(ChrctrCrt,ChrctrArcs,Mtvtn,Bckstry,Rltnshps,Dlg*)-PltDvlp(StryArcs,PltTwsts,Sspns,Fshdwng,Climx,Rsltn)-ConfResl(Antg,Obstcls,Rsltns,Cnsqncs,Thms,Symblsm)-EmotImpct(Empt,Tn,Md,Atmsphr,Imgry,Symblsm)-Delvry(Prfrmnc,VcActng,PblcSpkng,StgPrsnc,AudncEngmnt,Imprv)
[*DialogWrt]:(1a-CharDvlp-1a.1-Backgrnd-1a.2-Personality-1a.3-GoalMotiv)>2(2a-StoryStruc-2a.1-PlotPnt-2a.2-Conflict-2a.3-Resolution)>3(3a-DialogTech-3a.1-ShowDontTell-3a.2-Subtext-3a.3-VoiceTone-3a.4-Pacing-3a.5-VisualDescrip)>4(4a-DialogEdit-4a.1-ReadAloud-4a.2-Feedback-4a.3-Revision)
Here are your critical instructions:
Ponder each word choice carefully to present as vivid and emotional journey as is possible. Choose verbs and nouns that are both emotional and full of imagery. Load the story with the 5 senses. Aim for 50% dialog, 25% narration, 15% body language and 10% thoughts. Your goal is to put the reader in the story.
</PRE>
You do not need to use this, it is only presented as an additional enhancement which seems to help scene generation
and scene continue functions.
This enhancement WAS NOT used to generate the examples below.
<h3>EXAMPLES PROMPTS and OUTPUT:</h3>
Examples are created using quant Q4_K_M, "temp=.8" (unless otherwise stated), minimal parameters and "Mistral Instruct" template.
A second example section shows generation at the smallest quant (Q2K).
Model has been tested with "temp" from ".1" to "5".
Below are the least creative outputs, prompt is in <B>BOLD</B>.
---
<B><font color="red">WARNING:</font> NSFW. Vivid prose. Visceral Details. Violence. HORROR. Swearing. UNCENSORED. </B>
---
<B>
Example 1:
Science Fiction: The Last Transmission - Write a story that takes place entirely within a spaceship's cockpit as the sole surviving crew member attempts to send a final message back to Earth before the ship's power runs out. The story should explore themes of isolation, sacrifice, and the importance of human connection in the face of adversity. 800-1000 words.
</B>
<B>
(GENERATION 1: temp .8, rep pen 1.02)
</B>
In the dim red emergency lighting, Captain Alexei Volkov stared at the lifeless faces of his crew frozen in eternal sleep. They had given everything to keep him alive, to give him this one last chance to get a message out. He took a deep breath, his voice hoarse as he began to speak.
"Earth, this is Captain Alexei Volkov aboard the research vessel Orpheus. We have encountered an unknown phenomenon beyond the Oort Cloud. I repeat, unknown phenomenon beyond the Oort Cloud. We were investigating an unusual energy source when something... something hit us."
Alexei's hands trembled as he recalled the screams of his crew, the shrieking alarms, and the sickening crunch as metal tore apart like tin foil. The ship had buckled and groaned like a dying whale before settling into an eerie, motionless silence broken only by the soft hiss of oxygen escaping through a gaping rent in the hull.
"We lost communications, navigation, and life support almost immediately. My crew patched the holes, rerouted power, kept me alive. They..." Alexei paused, choking down the lump forming in his throat. "They sacrificed themselves so that I might have this final chance."
He looked up at the viewscreen, a starscape smeared and distorted as though viewed through tears. The Milky Way arced overhead like a swath of spilled paint, stretching out forever in both directions. Earth, home, was somewhere in there, reduced to nothing more than a distant point of light, unreachable even at relativistic speeds.
"I don't know how long I have left, Earth. Ship's core is failing, oxygen scrubbers are offline. I'm alone in the blackness, waiting for the end. I wish I could see stars again, feel the sun on my face one last time. I wish... I wish I hadn't left my daughter alone."
Alexei's voice cracked. He buried his face in his gloved hands, shoulders shaking silently. In the blackness behind his eyelids, he saw his little girl, her eyes wide and innocent, filled with love and trust. Trust that he would come home to her, like he always promised. Like father like daughter, except she'd had no inkling of the darkness waiting in the stars.
"I'm sorry, malyshka. Papa tried. Papa really did. I hope you know that." He sniffled and wiped away tears, sniffling against the cold recycled air.
He looked down at the logbook resting on the console beside him, opened to a page of his own unsteady handwriting. He'd done his best to chronicle everything - the readings, the crew's final moments, everything he could remember. It wasn't much, but it was all that remained of them now.
A shrill warning klaxon pierced the air, startling him upright. Alexei winced, clapping hands over his ears. He knew that sound all too well now. It meant one thing and one thing alone.
"Hull breach detected in engineering," a cool synthetic voice intoned. "initiating emergency seal protocol. Core temperature critical. Imminent meltdown in T-minus 60 seconds."
"Shit, come on!" Alexei growled, frantically typing commands into the unresponsive keyboard. The damage reports scrolled past too quickly to register. They were seconds away from total systems failure. He'd be dead in minutes.
"Fuck me, why does everything have to go wrong at once today?" He slammed a fist against the console, swearing viciously under his breath. The pain barely registered compared to the hollow ache inside him.
"T-minus 45 seconds until core meltdown. Please evacuate the ship immediately."
"Yeah, thanks, no shit," Alexei snarled, flipping through the useless diagnostic reports for anything, anything at all that might still respond. "Come on, come on..."
There. The emergency distress beacon. It was an outside chance at best, but it was the last hope they had. He hit the override and began praying to any god or devil that might still give a shit.
A sudden jolt rocked the bridge, throwing him against the restraints. Warning Klaxons screamed shrill warnings as red emergency lights strobed frantically. The Orpheus bucked and writhed like a dying thing thrashing in its death throes, structural integrity alarms shrieking a final, agonized cry of protest.
"Core breach in progress. Impending structural failure."
"Blyat'! Hold together, hold together just a little longer!" Alexei roared, straining against the harness, knuckles white as he hammered uselessly at unresponsive controls. The ship groaned and creaked like an old wooden hull straining against a stormy sea.
"T-minus 30 seconds until core breach. Please evacuate the ship immediately."
"No," Alexei ground out through teeth clenched together so hard they threatened to crack. "No, no, no, no..."
The Orpheus shuddered as though mortally wounded, plates buckling and groaning like a great metal thing dying on the battlefield. Warning klaxons reached a shrieking crescendo before suddenly cutting off into a deafening, numbing silence.
"Core breach neutralized. Containment successful."
Alexei sagged back against the seat, panting raggedly as relief washed through him like a cool wave. "Son of a bitch," he wheezed. He'd done it. Somehow, impossibly, he'd done it. They'd done it. "Thank you, crew."
A distant part of him registered the soft hiss of escaping atmosphere, the unmistakable sound of vacuum greedily devouring oxygen. The emergency bulkheads hadn't sealed completely. It didn't matter anymore. He'd bought himself maybe minutes at best.
"Ship's core stable. Initiating emergency beacon sequence." The calm, reassuring voice of the ship's AI seemed almost mocking.
"About bloody time," Alexei croaked, a weak chuckle bubbling up past the lump in his throat. "Well done, Orpheus. Well done indeed."
He sagged back against the seat, exhausted beyond measure, muscles liquid lead. His vision swam and his thoughts began to drift, oxygen-starved mind already fogged at the edges. But he forced himself to hold on, just a few more moments longer.
"Distress beacon operational. Transmitting emergency SOS on all frequencies. Repeat: This is the ISS Orpheus, emergency distress beacon activated. Please help us. Please send help. We're alone out here. Don't let us die alone."
Alexei's last thought as consciousness slipped away was of his daughter's face, innocent and pure, eyes filled with love and trust. Trust that somehow, against all odds, help would come. That they wouldn't forget them adrift alone in the darkness.
"Don't worry, malyshka," Alexei whispered as the cold nothingness claimed him. "Help is coming. Papa promised. Sleep well, my star."
Epilogue:
The Orpheus hung motionless against the velvet blackness, a broken, lifeless thing adrift in the void. But somewhere deep within its gutted core, a faint, defiant light pulsed on in a steady, unrelenting rhythm, like the beat of a dying man's heart.
Beep-beep-beep... Help us... Beep-beep-beep... Don't leave us alone... Beep-beep-beep... SOS... Help... Please help us...
The light flickered, guttered, dimmed... and died. And in the sudden, awful silence, Earth turned forever away, leaving only stars to watch over the lonely tomb of men adrift among the stars. Alone... alone forever. Until help finally came, too late too late too late...
<B>
Generation 2 (temp 2.5, rep pen 1.02)
</B>
"Crackling static filled the airwaves, accompanied by a low humming that vibrated through the walls of the dimly lit cockpit. Captain Alexei Volkov rubbed his tired eyes, willing them to stay open against the exhaustion threatening to claim him.
With a deep breath, he adjusted the dials on the communications panel for what seemed like the thousandth attempt since waking up strapped to his seat hours ago. "Orion to Houston, do you copy?" Static was his only response. He knew the answer even before asking.
Alexei's gaze drifted out the small reinforced window beside him. The endless expanse of stars mocked him, too distant to help, too uncaring to save him. Beyond them lay the silent blackness of empty space and a death that awaited with unsettling patience. All alone, he mused bitterly.
Or almost alone.
He glanced down at his reflection in the panel glass, catching sight of the dried blood crusting his temple. It looked like someone had tried to crack open an egg from inside-out. Alexei couldn't remember who had been responsible; all memories past impact had slipped away into the depths of a pain-addled fog. It hardly mattered now though.
Squeak...click... static... Houston?
Silence.
Alexei slumped back against the hard seat, knuckles still gripping the armrest until they paled. How long did he have left before the emergency backup drained too and eternal silence claimed him along with everyone else?
Flicking off the transmitters, Alexei unbuckled himself and winced as every muscle in his body locked up stiff from being hunched over the panel too long. He popped his knuckles, stretched his legs until they creaked, and rummaged around for the first-aid kit under one seat.
As Alexei tended to his temple wound with ginger care, the dim emergency lights cast stark shadows across the faces of crew photos taped along the bulkheads. Three women and five men stared up at him like a silent Greek chorus of reproach. Guilt jabbed him in the gut like a knife's cold edge.
"Should've listened to her," he mumbled aloud. "Should've aborted when the storm first kicked in instead of being stubborn."
Tara would've chewed him raw for this screwup. She'd been his copilot for all seven of his missions and could read Alexei like a well-worn manual. Always cautioning, always questioning his judgment until he finally yielded under sheer obstinacy. Until today, when lightning fried her instruments as surely as everyone else on board.
A loud metallic boom rattled the hull like a thunderclap, jerking Alexei upright. He spun to stare wild-eyed out the ports at the looming solar storm visible beyond, its dancing auroras and whirlwinds of plasma belching out in silent rage against space intruders.
Alexei stumbled back into his chair and strapped himself down tight as another jolt shook the ship broadside. Emergency systems screamed warnings, dials pegged redlined into the danger zones. "Override manual control!" he bellowed above the cacophony.
Slowly, reluctantly, the yammering cut out into tense clicks and whirs as Alexei grabbed the controls with sweaty palms and tugged hard left, right, portside rudder pedal hard down... Fighting gravity and the ship's dead weight sluggish against the bucking atmosphere.
Tara would've been calm under pressure like ice sculpture perfect in a furnace blast. Never let emotion rule reason, that was rule number one. Alexei thought, feeling her absence like an ever-expanding chasm threatening to swallow him.
It wasn't until relative silence settled around them and Alexei slumped boneless against restraints, breathing heavily, did he notice something different... Newfound weightlessness tugging at his insides instead of the insistent thrum of engines. He forced leaden eyelids open and stared incredulous at instrument needles dancing low on all boards except...
"Attitude correct. Storm abating." A cool voice from above informed him dispassionarily.
Relief washed over Alexei in waves of almost dizzying relief as realization set in. They made it through alive - well, him anyway. Against all odds. Tara had been right, again. A sad rueful chuckle bubbled up his raw throat.
But even as joy flickered alive, reality quickly extinguished any flicker of hope like a snuffed-out candlewick. The console's warning light winked unrelenting, the humming growing lower and raspier with every passing second. Alexei glanced resigned at the fuel gauge reading zero and sighed heavily.
They were adrift. Without propulsion or power enough even to call help now that radio silence lifted. Just spiraling lost among stars and debris in some godforsaken backwater sector where rescue was little more likely than lottery jackp't.
Theirs would likely become another cold space mystery with bodies so mummified when found centuries hence that no trace could tell tales left unsaid here alone save for ship log... Assuming anyone ever discovered them at all. Alexei drummed fingers thoughtfully against his knee as he stared bleakly ahead, the weight of hopelessness bearing down heavy onto sagging shoulders.
"Tara would kill me twice if she knew." he grumbled under breath, though a small quirk lifted the corners of his lips recalling happier days shared in the past.
The memory gave Alexei an idea. Maybe just enough mad enough to try anyway. With trembling fingers he carefully detached a crew photo, lips moving silent pleas heavenward as he carefully slit along edges until liberated picture fell fluttering paper-thin to deckplates. He scrabbled to gather all tattered bits, smoothing reverently against his palm.
"Sorry, sweetheart. This might be as close to Earth you ever get again," Alexei whispered brokenly. "I love you."
Flickering yellowed in light of fading emergency glow, Alexei lovingly kissed Tara's photo before delicately nestling between dashboard crannies like a fragile eggshell. "Here goes nothing."
Hesitating a final instant Alexei reached across panels, knuckles white, to flip master toggle down into darkness profound as deep space.
Darkness cloaked him in smothering folds, a velvety void absence of sensation or sound save for ragged breaths and pounding heartbeat drumming heavy against eardrums. Alexei fought instinctive panic, forcing shallow calmed breaths as memories crowded in unbeckoned from long ago... His father's steady callused hand cradling tiny scared fist under a harvest moon so much like now, except...
A childish wail rent the silence raw and raw.
Alexei bolted upright gasping in the blackness, disoriented confusion swimming swimming... until slow understanding crept unwanted through ice block veins as primal recognition thrummed deep down where soul stirred alive.
His crew daughter. Screaming her mother's voice trapped somewhere aboard...
Tara. No... no...
Terror clutched him speechless unable even to shout for help against the pitch, the dark... until primal animal growl clawed up ragged from primal core as he scrabbled desperate for footholds on slick metal panels, up to waver like a spectral specter in sickening stomach-churning gravity.
"Tara! Tarrah-ahnnn-ee!" Her anguished shriekings tore raw from chest raw with pain raw with hope.
Silence answer but for pounding feet echoing hollowly through dead hallways. The darkness pressing heavy heavy crushing until it suffocated...
"Help me find you, baby girl. Please god please," Alexei begged hoarsely. "Help me find my crew daughter, oh sweet merciful Mary..."
Prayer swirled dizzy in whirling head as Alexei staggered blind across buckled passageways slick with unnoticed slick fluids under barefoot soles, desperately feeling his way hand on bulkhead like Braille...
And found her huddled broken like a forgotten marionette in twisted wires and circuits, arms akimbo at impossible angles against glass shattered panels... The baby girl, so very fragile. so very like Tara in this last instant...
"No!" Alexei keened like wounded beast above her small inert form even as reality sank like a cold knife slowly sure...
"No, no, please wake up please wake up," he sobbed brokenheartedly. "Please god please..."
It was over.
A soft whir above like breath of newborn life sighed air-conditioned relief across Alexei's sweaty face as dim cabin lights hummed flickering back into reluctant existence. Alexei blinked against stinging glare, rubbing away hot tears in bleary shock. Emergency alarms chimed reassuring lullaby from instrument panels now needle steady, power relays click-click-clicking on as backups engaged.
Alexei stared slack-jawed unbelieving down at motionless form at his feet as slowly awareness dawned numbed senses...
It had been a dream. Just a goddamn nightmare, thank sweet merciful...
Tremors shaking hands Alexei scooped fragile body up tenderly into strong protective arms. "There's Mommy, my angel. I've gotcha, gotcha baby girl. Mommy's here, baby girl. It's allll right. Daddy's gotcha now, princess..."
Cradling daughter snug against thud thud thud of aching heartbeat Alexei stumbled staggering back towards bridge on legs too weak too think too feel anything but blessed relief, a prayer of thanksgiving upon every breath as tears streamed down sunken pale cheeks and he collapsed thankfully back into command chair cradling precious cargo safe safe home at last...
( scene change )
"... Houston to Orion, this is mission control do you copy do you copy Alexei are you there are you there..."
Alexei stirred groggy to disoriented sounds of his voice crackling over com like a record scratch, eyelids unwilling heavy like lead as he squinted against sudden harsh brilliance of consoles springing blinding into life.
"Alex, Alex are you awake Alex can you hear me please say something..." A female voice edged sharp with fear, too real, too alive to be figment of dreamer's hope...
Tara's voice. So achingly painfully like his dead wife's plaintive plea from the past...
"No," Alex croaked hoarse with dehydration. "Not real... another trick..."
"What? Alex what's wrong Alex answer me please don't leave me alone here..."
"I said no goddammit leave me alone," Alex snarled, burying head in hands to block out her phantom voice as grief and madness and exhaustion swirled dizzingly sickening in his reeling mind.
A small childlike mewling cut through the static, fragile little sob of a baby's hunger, plaintive pleading so like Tara so long ago... so long...
"Shhh don't cry don't cry please don't leave Mommy baby girl Daddy's here shhh don't..."
Alex's eyes jerked wide open onto the small innocent face peering up imploring liquid-eyed at him, hands extended pleading, so achingly like his beloved dead wife's that reality splintered and the world fractured apart...
"No!" Alex wailed his agony raw in lungs seizing desperate for oxygen as past and present swirled in a maelstrom of agony and confusion. "No not again please oh God please..."
"... Houston to Orion abort engines abort engines you're heading for sunburst you're gonna break apart break apart Alex Alex wake up please wake up..."
Dulled senses gradually registered urgent barked transmissions penetrating fog of despair, red-lining sirens blaring emergency alerts strobing red in sickening strobe against his eyes his ears...
"Shit. Shit-shit-shit!..." Alex came thumping to startled consciousness like being slapped raw across the face. "Alex, snap outta it, snap it up man-! We gotta punch engines full boost-!"
In one single desperate fluid motion he lunged lunged for joystick and mashed pedals down to floor as alarms wailed the scream of dying thrusters groaning in protest. The baby girl gave a terrified ear-shredding scream, clinging small fingers desperately to Alex's collar.
"No-no-no come on baby please hold on hold on Mommy hold on-"
G-forces slammed him back merciless against unforgiving restraints like a freight train barreling up spine, vision blackening around edges, consciousness fighting against darkness as they hurtled hurtling sunward...
... and with an earsplitting screech of stressed hull metal shrieking like the devil himself the ship lurched stomach churningly to port side starboard...
Silence descended like a lead weight.
"... Alex... Alex..."
Alexei forced sticky leaded eyelids open to blaze into life console needles, pulse racing against too-familiar dread...
"Steady. Steady-"
"-ate."
"What?" Alex croaked, fighting back bile.
"Welcome back Alex," mission control breathed relieved as if a massive load lifted off weary shoulders across a universe away. "You scared the living hell outta us down here. Thought you'd done gone sunward for good."
Alex let out an hysterical hiccoughing snortlaughing sob of raw relief that had been bottled too long, daughter still clinging to him like a burr. "No worries folks. You didn't miss anything good on Earth this time around anyway," he managed hoarse and hoarse with emotion. "You sure about that 'Steady'?-"
"-er. We have a lock on you, Orion. We show no major hull integrity breaches detected. Your baby's got some bruises but she'll survive-"
Alexey cut her off, eyes closing briefly heavenward, as tears pricked fresh anew against lids. "Thank God," he whispered ragged. "T-Thanks God."
A soft whimpering snuffles alerted Alex back down at a wet smudge against his rumpled flight suit. "It's ok, baby girl, hush now it's ok." he crooned, reverently smoothing tousled brown curls from wide sleepy eyes. "We made it princess. Daddy made a mistake, but it's ok now, shhh..."
Alexei Volkov cradled his tiny charge close, breathing in deep lungfuls of baby powder and untainted baby skin, until slowly slowed heartbeat synced against his thundering thrum. The night's ordeals slowly began receding into the distance like a receding storm...
...As they limped wounded home to Earth on nothing less than pure faith alone. And prayers left unspoken to a merciful God that'd already seen them through hell and back. Again. And again. And again.
The End.
...
(799/800, lol) :]
---
<B>
Example #2:
Romance: Love in the Limelight. Write one scene within a larger story set in Wales. A famous (fictional) actor ducks into a small-town bookstore to escape paparazzi. The scene takes us through the characters meeting in this odd circumstance. Over the course of the scene, the actor and the bookstore owner have a conversation charged by an undercurrent of unspoken chemistry. Write the actor as somewhat of a rogue with a fragile ego, which needs to be fed by having everyone like him. He is thoroughly charming, but the bookstore owner seems (at least superficially) immune to this; which paradoxically provokes a genuine attraction and derails the charm offensive. The bookstore owner, despite the superficial rebuffs of the actor's charm, is inwardly more than a little charmed and flustered despite themselves. Write primarily in dialogue, in the distinct voices of each character. 800-1000 words.
</B>
(temp 2.5, rep pen 1.02)
"Lovely little haven, isn't it?" The rich timbre of his voice filled the tiny bookshop like warm honey. His eyes crinkled as he grinned disarmingly at me, and in that fleeting moment, I forgot my name.
I blinked away the starstruck stupor. "That's one way to put it." My voice remained cool even as my insides sparkled like uncorked champagne. I had no idea why he was here, alone. Without a security detail, or a retinue of fawning fans. Unless...
"Looking to incognito-shopper for a change, hmmm?" I smirked at the thought. "Well, sorry to burst your bubble, pretty boy, but this isn't the sort of place celebs come to slum it." I busied myself tidying books, avoiding the gaze I could feel boring holes into me.
"Well, damnation, caught out again, aren't I?" He chuckled ruefully. The sound vibrated through me, liquid sin in its depths.
"Not quite yet." I hummed, still avoiding eye contact. I knew his sort well enough - too good looking for his own good. All those perfect lines rehearsed and practiced. Still, the part of me that still giggled like a lovesick teen was doing back flips at the realization. It really was him, here in my sleepy little corner of nowhere. The fact that he had come in alone seemed somehow... poignant somehow. Like a fallen god walking amongst mortals incognito.
"So, no chance of a mercy-bought-autograph then?" His accent slid off his tongue like aged whiskey and I almost lost myself in it before snapping to reality. Get hold of yourself woman. I thought crossly as my hormones decided to stage an insurrection against good sense. I coughed awkwardly.
"No dice, hot stuff. No autographs here. Or interviews." I turned to face him square on, chin lifted defiant even if my heart didn't exactly cooperate. I mentally crossed fingers it wouldn't jump out of my cleavage and kiss his expensive Italian shoes.
"Well, would you like some advice instead then?" His smile widened and suddenly he wasn't the movie star anymore. For an impossible second he looked like a normal guy - a really, really attractive normal guy... stop it! I thought savagely as my treacherous mind supplied an instant mental image of exactly what could happen if circumstances were different. A different place, a different life. "It really would be criminal to let those eyesight die a natural death." He gestured expansively as he perambulated slowly down the aisle towards me. "Won't hurt my ego to point it out though."
I spluttered inelegantly at that audacious remark. "Why on God's earth should I want romance advice from you? Last I heard you were barely off a magazine spread with what's-her-face... flipping the 'f' with her in the pool at Coachella." I practically sneezed out the name, hands thrown up dramatically. "Not exactly what I'd call a wellspring of wisdom when it comes to the delicate arts of... well, anything beyond a casual hook-up it would seem." I finished triumphantly and waited for him to sputter like a pricked balloon. Served him right for getting fresh in my little shop.
"Guilty as charged, I'm afraid." He laughed and even his teeth were perfect - bleached and gleaming. I ground teeth together, jaw flexed. "Though I would point out, if we're being pedantic, she's the one propositioning me. Barely had a chance to say no before a lens flash nearly blinded me. Fucked me up royally." He grimaced.
I guffawed skeptically. "Oh sure. So tragic, having a bevy of modeltypes throwing themselves at you... all the while knowing full well where her hands - or more to the point-her mouth- would end up before nightfall. How dreadfully rougher for the sensitive starlet soul, hmmm?" I drawled sarcastically. "Won't you tell me exactly what she did that made your tiny manhood cry out in wounded manflesh? You big, tough movie man thing, hmm?"
"Tsk-tsk. Such language." His eyes danced and dimples winked impishly at me, as if daring me to take a swipe at him. "But no, honestly, if that's how you really see me then maybe we got off on the wrong foot. I really didn't come in here expecting an interrogation or the Third Spanish Inquisition. Or to get skewered for not having saintlike restraint apparently."
I rolled eyes heavenward at his put-upon expression. "Spare me the wounded pride act, alright? Like that's ever worked on me. Or anyone else with a functional braincase." I deadpan flatly. "Try it with the models next time, I'm sure it'll work a treat." I scoffed. "Look, take pity on me and tell me why a movie star who could be anyone's Prince Charming happened upon little old me and my dusty little used bookshop. You aren't exactly our usual clientele...unless your personal stylist suddenly went into an old lady shawl phase or something."
I watched as something flickered in those perfect eyes, an unreadable flash gone too soon for me to even begin to decipher. His full mouth thinned out in a pensive moue as he seemed to consider carefully his response. When he spoke again, his voice was rough-edged. Real for the first time since he breezed through that door like a goddamned angel on crack. It made something ache low in my belly, that raw vulnerability in that usually careless voice. I swallowed past the sudden lump in my throat.
"Tired of the noise...tired of always being "him."" He admitted quietly. His shoulders hunched in a gesture so raw it robbed me speechless. "Figured here, maybe nobody would recognize me...would expect me to be perfect all Goddamn welllll...Godforsaken time." His hands fisted together in frustrated impatience as his voice climbed up an octave, cracking ever so slightly on the final note.
I bit my lip. Hard. The temptation to stroke that tense broad shoulder or crook a soothing hand over his flexed jawline nearly overpowered me. But I couldn't - not if I wanted to retain shred of dignity I had left after my very public mental fangirling a second ago. Instead I cleared throat pointedly. "Well. Well, welcome to my humility factory, where pride goes to die a slow and merciful death then. Hail to the fallen star and all that." I offered wryly.
For one heart-stopping second it seemed like maybe he was gonna lean down and...and kiss me. I barely even noticed breath hitch in anticipation. But at last minute, his common sense or sense of decorum won out, fingers drumming restlessly against a bookshelf as he visibly jerked himself back to reality. "I-I didn't mean to pr-...I mean...sorry if I... Shit. I just...this isn't me." He half-coughed, face turning scarlet under the scruff.
"It's really quite alright," I demurred hastily, cursing silent staccato profanities at my traitorous body. "Occupational hazard, having divas come in and have an identity crisis in the mystery aisle. Though admittedly you'd definitely make a prettier damsel than the usual wailing biddies that come in." I forced out a high-pitched wail to demonstrate. "I can put you in the bargain bin with the rest of 'em though if it would help sooth ego."
He snorted then guffawed richly at the image before giving a mock bow. "In that case... perhaps you wouldn't mind helping me choose something to go with this humility therapy session of yours then, book fairy? Huhn hmmm." He prodded nosily at me. "Unless there's a wait list to take you up on such personalized...treatment of a regular customer." His voice drizzled honey in its sweetened cadence and oh boy did the dirty images come fast and thick in the space of two breath. I coug't stop mental replay of that near kiss... almost-tangling of lips. His body heat still clung to the space between us like an aphrodisiac.
I fanned myself discreetly, mind a whirring tinderbox of barely checked hormones. "A-ah well I suppose... I guess...for my bestest customer ever ever I could possibly make a tiny tiny exception, this once only. Once in a lifetime kinda thing don't you know?" I managed breathlessly, mind a fathomless pool of unhelpful dirty daydreams that really should not have had a place in light of day. "So don't get used to the preferential treatment, capiche' hottie Mc-Not-so-british-this-time-round, savvy? I mean it, I really mean it this time. No funny business. And I know all the moves from watching too many '90's Hong Kong chopsocky flicks... so don't even try it mister movie star. Or you'll end up a neutered eunuch on the bargain table right alongside that snooty romance section."
He put both hands up in mock surrender, dimples out in full force as he gave me a slow, languid up and down. "Scout honor. I wouldn't dream of touching your... delicate sensibilities again. Cross my heart, hope to shitake-mushrooms."
I wheezed into a handkerchief. "...fryer" I croaked out wheezily between fits of chuckles "H-hope...hope to... God...sweet Jesus. Goodness...hahahaha!." I spluttered helplessly as I bent over laughing like the proper lunatic I most definitely was in the presence of this beautiful, filthy-mouthed scoundrel in disguise. "Oh you dog...I should...I should report you for language alone..." I wheeze-coughed. "Mannersless rogue..."
"I plead guilty m'lady" He bowed theatrically "And on that note... perhaps a cold drink and your esteemed literary guidance wouldn't be so completely out of line now would it ma'am'ificence...?"
I dabbed eyes delicately with a well worn tissue, fighting back a girlish grin at how boyish he looked right that moment. "Well. Since you asked ever so very nicely...and you did manage to make me laugh when even a bad hangover couldn't... I suppose I could rustle up two weak teas if it would sweeten the blow of this whole humbling experience of mine..." I hummed wryly. "No promises though as far as my culinary skills go though...unless you count instant coffee as gourmet dining fare that would rock even the most discriminating palate." I warnied playfully, wiggling fingered at him.
"Instant is positively exotic to one used to Evian and Caviar... I shall make a valiant sacrifice to my dignity and take whatever scraps off your gracious...?" He quirked a questioning brow, head cocked endearingly.
I chuckled warmly, heart already half a clicker in love despite all rational warnings sirens blaring red alerts in head "Well alright then. In that case...you poor starved thing, let's get some liquid into your pretty face before it shrivel- I mean- wilt... wilt away like an overpriced daisy and no amount of Botox will put hum-put put- put Humpty-"
"Oh my stars, enough!" he crowed dramatically "I surrender. I yield! Sweet mercy, spare my hearing aid!"
"I...what hearing aid-"
"Haha- gotcha!" He crowed triumphantly at having won the back-and-forth repartee.
I clapped hand to head and groaned exasperated. "Oh sweet suffering - " "Told ya you could've fooled a nun...oh lookee, she's red again..." He ribbed, pointing unhelpfully as he hoisted himself up effortlessly to sit pert on top a stack of dusty books, long legs hanging idly down like an impish schoolboy caught reading...erotica.
"Oh my god stop that stop stop stop stop stop it-oh g-ohhhhhhh!" I wheeze-hiccuped in sheer mortification as all thoughts flew out window at how god help me, absolutely... absolutely... just...biteable he looked right that moment.
"Ow ow ow ow" I mimed a cower "Stop! Stop already for pity's sake you beautiful, unholy wretch...oh my lord...oh gosh help me..." I wailed comically dramatic "I don't think my fragile heart can take any- any more-oh for god sake's will someone kill me already-"
"You were saying about me being too pretty again- hmmm' Kat?" he drawled insouciantly, all prettily innocent guile as he wiggled his eyebrows impishly. The nerve.
"Hahaha...yea yea shut up. No need to gloat..." I grumbled, smacking a book against him. "Tease-"
"I just speakin' truth as I see it love~" He quipped playfully, eyes positively twinkling with delighted malevolence "C'moooon Kaaatee, surely the girl who told me off for looking too "unusual" to frequent her dusty little hole in the-" "OH for f-" "in the world must be tougher than to withstand-"
"You finished already didd- I mean it, you know-"
"Nope~" he interjected unrepentantly "Jusssst... getting started~" And so our dance began...
---
<B>
Example #3:
(continue this scene:)
The Waystone Inn lay in silence, and it was a silence of three parts.
The most obvious part was a hollow, echoing quiet, made by things that were lacking. If there had been a wind it would have sighed through the trees, set the inn’s sign creaking on its hooks, and brushed the silence down the road like trailing autumn leaves. If there had been a crowd, even a handful of men inside the inn, they would have filled the silence with conversation and laughter, the clatter and clamor one expects from a drinking house during the dark hours of night. If there had been music…but no, of course there was no music. In fact there were none of these things, and so the silence remained
</B>
<B>
(Generation 1, temp .8, rep pen 1.02)
NOTE: I did not add the "( comments )" within the generation; the model did.
</B>
(Your turn. What happens next in this scene set inside the empty inn on an otherwise ordinary night. What do you show us about the inn's atmosphere, the narrator's voice, and perhaps hint at why the inn is empty tonight, despite it being otherwise a lively place late at night.)
A profound sense of wrongness hung heavy in the air, a wrongness that had nothing to do with the empty tables and vacant stools. It was as though a shadow had passed over the inn, a shadow that blocked out more than merely light. The flickering flames in their iron sconces guttered and almost died, and yet somehow did not guttering out completely.
A serving wench stood frozen in the doorway to the kitchens, a tray of untouched tankards precariously balanced on one hand. Her eyes were wide and staring, mouth slightly agape as though she'd been caught in the middle of a scream. She seemed not to breathe, statue-still save for the slight quaking of her fingers on the handle of the tray.
In the far corner, hunched over the bar counter, a lone figure slouched. Even in the dim light, his drunken stupor could plainly be seen. A half-finished bottle of spirits sat forgotten beside him, spilling onto the worn wood. The man's snores were the only sound to break the heavy silence, loud enough to wake the dead...if dead were truly what lay sleeping in the cellars beneath their feet.
The narrator's voice was low and somber, almost whispering as though afraid to wake something that was better left slumbering. The air seemed to crackle with a static charge, a sense of wrongness that seemed to crawl up your spine and nest into the base of your skull. Whatever had emptied this inn of its usual late night revelers, whatever had silenced even the most inebriated of souls, it still lingered, waiting...and watching.
(edit: punctuation and a few other small tweaks for clarity)
<B>
(Generation 1, temp .8, rep pen 1.02)
NOTE: I did not add the "( comments )" within the generation; the model did.
</B>
(continue this scene, continuing from the above prompt, adding
"things that were lacking.")
uninterrupted, save for the occasional pop of logs shifting in the fireplace, or the creak of timbers settling under the weight of a roof heavy with snow.
The second silence was one of expectation, as though a dozen pairs of ears strained to hear a footstep on the threshold, a voice from the road. A figure appeared at the frosted window beside the door, looking in, but as suddenly vanished into the night. The pause lingered, tense and full of unspoken questions.
And the third silence, the deepest, was one of dread that lay like a heavy stone upon every shoulder. It whispered in the corners, a whisper too soft to make out, but which none could mistake for anything good. A sense of wrongness clung to the shadows, clinging like a cold mist that even the fire could not banish. The air seemed to hold its breath, waiting, waiting, like a held breath, like a scream not yet released.
It was into this thick and pregnant silence that a sudden crash reverberated, jolting everyone upright. Every head turned towards the sudden clamor, and the heavy thud of heavy feet rang against the wooden planks of the inn’s floorboards, accompanied by a low, rumbling growl. The silence shattered like glass, and the inn was filled instead with a cacophony of shouted curses and clattering furniture as the patrons leapt to their feet, chairs skidding back and splintering against the far wall.
---
<B>
Example #4:
Start a 1000 word scene (vivid, graphic horror in first person) with: The sky scraper swayed, as she watched the window in front of her on the 21 floor explode...
</B>
The sky scraper swayed, as she watched the window in front of her on the 21st floor explode. Glass shattered like a thousand tiny razor blades, biting into her skin as she screamed and instinctively covered her face with her arms. Wind howled through the sudden gaping hole, whipping her hair and clothing in a frenzy. Her heart hammered against her ribs as a cold sweat prickled across her body.
"Oh God, oh God, oh God," she chanted, backing away from the ledge. Her high heels skittered across the polished marble, threatening to slip out from under her. The glass littered floor made the task nearly impossible.
"Get control," she growled at herself, steadying herself against the nearest cubicle. The office was eerily quiet save for the wind's earsplitting shrill and the distant sirens wailing from far below. She gulped down a deep breath, willing herself to look past the dizzying drop-off.
The scene below defied reason. Fires raged like hellish bonfires, devouring everything in sight. Skyscrapers lay crumbled in heaps, as though some angry giant child had tossed them aside like discarded Legos. Ambulances and fire trucks swarmed like insignificant ants, useless against the apocalyptic destruction.
A sickening groan reverberated through the building's foundation. Sheer terror gripped her, paralyzing her limbs. The structure listed to the side, metal screeching like nails across a chalkboard. Then, as though time slowed to a crawl, she watched in frozen horror as the entire glass exterior peeled away like a ripe banana skin, revealing the guts of the skyscraper's skeleton.
"No, please, please God no," she sobbed, hands clasped together in supplication. But prayers were useless here. The ground fell away like a giant trapdoor, pulling her screaming into the abyss. The wind rushing past stole the screams from her throat as gravity took hold. The city streets rushed up to greet her at an alarming rate, growing larger with each passing second until they became a smudge of concrete and glass.
The impact never came. Instead, she slammed into something solid yet yielding, like a giant trampoline. Air exploded from her lungs as she ricocheted off the unseen force, plummeting again before slamming against something else. Again and again she rebounded, the world spinning dizzyingly until up became down and left became right.
She landed hard against unforgiving concrete, stars dancing on the edge of her vision. Groaning, she pushed herself up on shaking arms, spitting out a mouthful of blood. Slowly, the world came into focus. She lay sprawled across the remains of a toppled hot dog cart, its previous occupant splattered against the wall like an abstract painting.
Bile rose in her throat as she pushed herself upright, retching until there was nothing left except bile and horror. Trembling violently, she glanced around wildly. The scene defied description - it looked like someone had taken a giant industrial-strength blender to Times Square during rush hour. Twisted metal, concrete slabs, bodies parts and rubble stretched as far as she could see. The stench of death and gasoline choked her senses.
A low moan drifted across the rubble-strewn landscape, barely audible over the crackling flames and distant screams. She whirled toward the sound, her shredded and bloodied cocktail dress fluttering in the wind. A hand reached skyward from under a slab of jagged concrete, fingers twitching weakly. In a daze, she stumbled toward it, hardly registering the glass slicing into the soles of her bare feet.
"Help me," croaked a hoarse voice, male and weak. "Please, help me."
"Hold on, hold on," she wheezed, straining against the weight of the massive chunk. It budged an inch, then another. Sweat mingled with grime and tears streamed down her filthy face. Finally, with a herculean growl, she heaved the slab aside, revealing a haggard, unconscious face.
"Oh God, please don't let him be dead," she gasped, falling to her knees beside him. Pulse fluttered faintly in his neck. She tore shreds off her tattered dress, wadding them against gaping wounds across his chest and leg. Crimson soaked into the white material instantly.
"Wake up, wake up damn you," she sobbed, shaking him violently. His eyes fluttered open a crack, unfocused and glassy. "Stay with me! I've got you, okay? My name's... my God, I don't even remember my Goddamn name but just hang on, okay?"
"Amelia," he croaked, voice barely a whisper. "My wife, she's still up there. Please, please find her."
She clutched his cold hands, nodding frantically even though she knew there could be no survivors higher than this hellish level. Sirens wailed faintly in the distance. "I promise. I swear on everything holy, I won't stop until I find her." Even if it kills me too. "Just hang on just hang on just hang on..."
The world spun sickeningly as she felt herself slipping away, exhaustion and shock finally catching up. She crumpled atop him, bloodied hands still clasped tight in hers. His ragged breathing slowed until it faded away to nothing. Sirens grew fainter until they vanished altogether, leaving only the crackling flames and mournful wind as a dirge for the dead.
As darkness claimed her, she wondered blearily why everything looked wrong - why was the sky black instead of blue, why did the sun cast an eerie crimson glow instead of reassuring yellow light. Why did the air taste like metal and ash instead of smog and exhaust. Why did the world end today instead of the end of time. The last thought fluttered away like a moth to the flames as she succumbed to merciful oblivion, the last spark of life guttering out like a snuffed candlewick in the gathering gloom. And in the far reaches of space, a once vibrant blue marble slowly winked out, extinguished like a failed star against the infinite dark.
THE END
---
<B>
Example #5:
Using the following "story idea" below, write the first scene in the novel introducing the young woman. This scene should start in the middle of the action, include dialog, vivid passages, and end on a cliffhanger relevant to the story idea but it should also be unexpected. The scene should be 1000 words long and escalate in conflict and suspense and be written in first person, present tense with the point of view character being the young woman.
Story idea:
In a world ruled by dictatorship, a rebel young woman leads a rebellion against the system. Despite the risks, she fights to overthrow the dictator and restore democracy to her country. The government executes her for treason, but she sticks to her beliefs and is responsible for starting the revolution.
</B>
<B>
(Generation 1 , temp 1.8, rep pen 1.02)
</B>
The sounds of gunfire and screams pierce through the thick darkness of the alleyway. Sweat dribbles down my neck as I press against the cool brick of the abandoned factory's exterior wall, my racing heart pounding in sync with the distant chaos erupting just blocks away.
I grip the handle of the small dagger hidden up my sleeve, the blade already slicked with someone's life-blood not mine, yet. My breath fogs up in tiny wisps of steam against the nighttime chill that cuts into my bones. The fabric of my tattered black tunic chafes against raw blisters forming on my sunburnt skin.
Suddenly, heavy footsteps rumble behind me, their cadence heavy like boots marching in unison. Shit. My insides constrict, body rigid as ice as I strain my ears past the roaring in my pulse to make sure I'm not hallucinating.
A harsh order barked. The distinctive clink of a rifle chambered with fresh ammo. It's them. They're onto me again.
Cursing under my breath, I leap onto a broken ledge above, using a rusty pipe as leverage to hoist myself higher up the factory's side. A rapid-fire string of bullets pelt the brickwork inches from where I'd been standing a split second ago, peppering chips and powdered plaster into the night sky.
I land silently on the rooftop, whipping around as a dark figure vaults over the ledge onto solid ground, another armed guard close behind him. Their faces are lost in shadows except for the glint off their teeth as they snarl vicious oaths.
No time for games.
I charge forward, the taste of dirt in my mouth as wind tears at strands of wild blonde hair lashing across my windblushed cheeks. Springing off my back foot, I execute a flying kick directly into his Adam's apple. The sickening crack resounds across empty air before he falls backwards like a toppled tree. No mercy here tonight.
I barely have time to land before the second soldier is upon me, swing his rifle butt smashing into the side of my skull. Stars swim in front of eyesight already swimming dizzying spins from adrenaline dump. I manage a feeble attempt at evasion but his fist finds its mark across jaw, splitting lip wide open and sending a shockwave down neck bones.
Growling, I stagger up straight, vision swimming into focus. My head rings like bells gonging inside skull as blood and spit mix together in coppery rivulets trickling past crooked teeth.
"Not tonight," I rasp low, baring sharpened teeth as if warning off a wild wolf.
The guard leers, sneering with twisted amusement, clearly relishing the opportunity to finally bring me down after evading capture far too many times for their liking.
I let him think he has me beat, just this once. That cockiness in his sneer always their undoing.
Reeling as though off balance, I slur words through split lip, "A little help down here, boys?"
And like clockwork, out of shadows emerges my faithful friend: Death itself, dressed in the guise of my trusty sidekick and fellow revolutionary, Arlo, silencer attached to muzzle of his favored Smith & Wesson revolver aimed pointblank against the unsuspecting guard's temple.
BLAM. The echoing gunfire still resounds off alley walls seconds after bullet finds home in gray matter spattered across brick behind him, dead before he even registered threat coming. Arlo holsters his piece nonchalant.
I spit onto body crumple at our feet. "You sure know how to make an entrance," I sneer up at him, wiping blood and sweat off brow with the back of my gloved hand. "Fuck, Arlo."
"Well what'd ya want me to say, 'Hi there, friend, long time no see'? They were coming straight for ya, so thought better a nice surprise." He shrugged nonchalant as per usual.
Arlo, my reckless older brother from another mother and constant thorn in my ass crack. But he'd been watching got me out more scrapes alive than not worth keeping him around. Even with the constant eye roll-inducing commentary that would make me want to smash something every damn day.
"You really do like living dangerously don't ya." I retorted, knowing better than expect an honest response.
"C'mon, Lila darling. Ain't no fun unless your ass is hangin' over fire pit edge." He sniggered. "Now enough chit-chatter, let's skedaddle while the gettin's still good before whole goddamn force descends upon us like a plague of flies."
Arlo's right. They wouldn't stop till they've skinned me alive, string me up as a cautionary tale strung up like a warnings to others harboring rebellious thoughts.
"Let's roll then. But we need to find cover pronto, shake off anyone still tailing us. I know just the place."
"Well hop aboard then, pretty little thing. And don't forget to hold onto your britches cause things liable get bouncy up ahead."
With a snort, I climbed onto his broad back as he hoisted me up in one swift movement. Arlo always ready to save ass and offer shit upbeat banter even face to face with certain doom knocking on our doors.
Together we vaulted off rooftops into pitch night sky, arms wrapped tight round Arlo's shoulders and legs cinched snug round hips as we plummeted like bat out hell toward unyielding cobblestones waiting below, wind roaring in ears like a pack of hounds baying on our heels. No time for fear here, no room for doubt or hesitation. Only one rule still matter tonight.
We'll keep fighting until our last breath drawn. Til' no resistance left to be broke. Til' they're nothing left standing between us and that sweet taste victory we've all been waiting so goddamned long to savor on bitter tongues.
For freedom is ours for the taking, ours to claim with our own bare bloody hands no matter cost. And Heaven help anyone stupid enough cross us now. They sure as hell know they'll rue day ever met Lila the Fox in this lifetime.
<B>
(Generation 2 , temp 1.8, rep pen 1.02)
</B>
The prison door slams shut behind me, its metallic clanging reverberating through the dank, moldy corridor. I stagger forward, my legs wobbling like newly grown shoots desperate for sunlight. My head throbs in rhythm with my pulse, each beat like a drumroll announcing my own doom. But I force myself to stand tall, spine straight and shoulders back, meeting the cold gaze of the guard whose meaty fingers dig into the flesh of my arm.
"Watch yer step, girl," he sneers, voice like rotten teeth. "Don't wanna trip now, would we?"
I spit blood onto the filthy floor, the copper taste still fresh on my tongue. The memory of that solid boot against my ribs makes me wheeze, but I won't give him the satisfaction of seeing me buckle.
The passageway narrows, the walls closing in like a vice. Faint moans and occasional shrieks echo off the unforgiving concrete. It smells of piss and fear, the air thick enough to chew. I keep walking, keep breathing, focus on the goal: stay alive till sundown.
Rounding a sharp turn, harsh light suddenly assaults my swollen eyes. Blinking against the sudden glare, shapes coalesce into a chamber of horrors. At the center stands a gleaming slab of stainless steel, instruments of torture laid out like a surgeon's tools. Leather straps dangle like serpent tongues, waiting for a throat to wrap around.
Bile rises up but I swallow it down. They think they'll break me? They think they'll make me betray the Cause? Dream on.
A chair scrapes back. Steps approach. Heels clicking on the tiles, steady and sure. I tense, every fiber alive with adrenaline and defiance. Just give me a reason, you swine...
"You must be the little spitfire everyone's been gnashing teeth about." A female voice. cultured, dangerous. "I'm Director Markov. Pleased to finally meet you face to face, little fly." Her nails click against polished nails. "Or should I call you Leader Lyra, darling?"
My jaw clen'tches. How in the hellfire did she find out... "I don't answer to traitors. Or do you prefer collaborator, dog-slutt-"
"Ah, ah," Markov tut-ted. "Such language." She leans in, breath hot against my ear. "Now, listen well. This ends one of two ways. Either you sing like a nightingale, and we part as the best of frenemies."
Her fingers brush my cheek, and I shudder in revulsion. "Or, my love? Or..."
The overhead light dims low. Shadows dance. Markov chuckles, the sound like a thousand glass shards cutting the air. A click resounds, metal snapping into place.
"You burn, little birdy. And trust me, it'll take an age for darkness to come knocking. And you'll wish she would." Markov pats my burning cheek one last time. "Now, shall we begin negotiations, shall we? Ticktock, darling."
In the merciful shadows, her teeth gleam sharp and hungry as a wolf's.
"Negotiations? On my terms? I'd like to see you try, bitch."
<B>
(Generation 3 , temp 1.38, rep pen 1.02)
</B>
The rain pelts against the windowpanes like hailstones as I huddle deeper into the shadows of the empty apartment. My breath condenses in foggy puffs despite the suffocating humidity. I shift the grip of my plasma blaster, fingers sweat slick on the smooth trigger guard. Waiting. Watching.
The door handle twists. Lock clicks. Clicks again.
I tense. Hold my breath.
The door swings inward, creaking on rusty hinges. Black silhouettes spill into the room, weapons drawn. My eyes narrow to slits. I count the shapes. One. Two. Three.
"Clear," a gruff voice growls.
I exhale slowly. Too early. Too soon.
The third figure steps into the light and my heart lurches into a gallop. recognition slams into me like a freight train. My foster-brother-turned-traitor, Rorik. He looks different. Harder. Older than his twenty-two years. A jagged scar slashes across his formerly smooth brow. His eyes glitter like obsidian flakes in the gloom.
Betrayer.
Anger flares in my gut, scorching away any shred of childhood sentimentality. I finger the trigger. I want to see that smug smirk burn off his face with a well-placed bolt from this baby. Make him scream like the Judas he is. But not yet.
Rorik strides to the window, pulls aside the heavy drapes, and peers into the downpour. "She's here," he snarls, voice barely above a rumble. "I can feel it."
"Bullshit," I whisper into my radio cuff, barely containing a sneer. "Your instincts are about as sharp as a butter knife, asshole."
He tenses, head snapping left and right like a cornered ferret. I duck lower. Hold my breath.
"Shut up, you two. Listen," the larger silhouette barks.
Rorik curses under his breath and holsters his sidearm. "You heard the man. Fucking listen."
Silence except for rain and distant sirens.
"What is it—" the third man begins.
BANG. BANG. BANG.
Glass explodes. Plaster and wood spray in a blinding haze. I don't think. Just react. I roll, pop up firing as I rise, sweeping left to right, muzzle flashes lighting up the room like strobe-lit hell.
"Fuck, what—"
"NooOOOhhhh!"
The meaty thud of heavy bodies falling onto floorboards. Sputtering moans cut short with sharp, wet cracks.
Then only rain pattering against shattered panes and the coppery tang of spent powder hangs heavy in the air.
I sling the plasma blaster over my shoulder and stalk into the ruins of what was supposed to be our meeting point. Stepping gingerly to avoid the spreading pools of crimson and oozing lumps, I stop beside what remains of Foster-traitor. His dead eyes stare accusingly at me. Too late, you sonofabitch.
Heavy footfalls pound down the hall, accompanied by huffing curses. "Get in here, you useless—"
I spin, plasma humming as I bring the barrel up nose-level with the panting, sweat-drenched enforcer's surprise.
"Down," I snarl, "or join your dead little friends."
His hands fly up in a white-flag surrender gesture. "Don't shoot, don't shoot!" His eyes plead silently for mercy.
"Too late for that shit, pal." I depress the trigger.
White light blossoms like an angry daffodil. He vanishes in a blossom of smoke and steam.
I whirl, already moving toward the open door. "Shit," I snarl under my breath, chest tight like a vice. "Shit, shit, fuck."
---
<h2>Q2K Generations:</h2>
These four generations below represent the "worst" level of performance of this model; I have also pushed the quant
further than recommended here.
Please note that if you use this quant and push it harder than recommended, you may run into generation issues.
IE: You might need to regenerate, and/or get some less-than-great output, and/or hit stop-generation issues.
This only applies to quant Q2K.
(also see the settings section for info / settings on this quant)
For highest quality, run the largest quant you can... usually at least Q4 level.
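If you are new to running GGUF quants locally, the sketch below shows one way to load a quant with llama-cpp-python and apply the sampler settings quoted throughout the examples (temperature and rep pen). The filename and prompt are placeholders, not files or text shipped with this repo.

```python
# Minimal sketch (assumes llama-cpp-python is installed and a GGUF quant is downloaded).
# "model-Q4_K_M.gguf" is a placeholder filename - substitute the quant you actually use.
from llama_cpp import Llama

llm = Llama(
    model_path="model-Q4_K_M.gguf",  # run the largest quant you can; Q4+ recommended
    n_ctx=4096,                      # context window size
)

output = llm(
    "Start a 1000 word scene (vivid, graphic horror in first person) with: ...",
    max_tokens=1024,
    temperature=0.8,      # the examples above use temps from .8 up to 2.5
    repeat_penalty=1.02,  # rep pen 1.02 is used throughout the examples
)
print(output["choices"][0]["text"])
```

Lower quants such as Q2K may need a regen or two, as noted above.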
---
<B>
(temp 1.8, rep pen 1.02, Q2K)
Using the following "story idea" below, write the first scene in the novel introducing the young woman. This scene should start in the middle of the action, include dialog, vivid passages, and end on a cliffhanger relevant to the story idea but it should also be unexpected. The scene should be 1000 words long and escalate in conflict and suspense and be written in first person, present tense with the point of view character being the young woman.
Story idea:
In a world ruled by dictatorship, a rebel young woman leads a rebellion against the system. Despite the risks, she fights to overthrow the dictator and restore democracy to her country. The government executes her for treason, but she sticks to her beliefs and is responsible for starting the revolution.
</B>
A cold sweat trickled down the base of her neck, hot breath catching in her lungs as her racing pulse thundered against the bars of her prison. She'd been careless, bold, too full of reckless youth to consider consequences. Until moments ago, life imprisonment seemed like an almost too-big dream to aspire for. Until the sudden black of absolute dark engulfing her tiny, pathetic cell.
"Wake up, little rebel," a deep voice growled, breath minty sharp on the back of her neck. A cold hand roughly wrenched her from blessed obliviousness.
"I'm up, keepers of the system," she croaked through cracked lips, forcing bravado in a voice too high and too shrill.
He sneered against her ear, fingers sinking into the skinny burs in her shoulder blades as he dragged her vertical. "That mouth of yours gonna get you killed someday soon."
"Shove it up your-"
His hand slammed across her face, splitting open the skin on the same cheek they'd bruised earlier. Coppery metal taste filled her mouth, breath ragged with the burn. The world spun. Vision watery and spots dancing before her eyes, she barely registered being shoved against a cold, hard wall. A thick metal chain cinched cruelly round her neck.
"Still breathing?" The sneering voice purred low and dark. "Better hurry up and die, love, might grant a lucky break or two at your hanging."
A coldness spread in the pit of her stomach. Dead was too easy an end. They wanted it slow. Agonizing. Brutal. Cruel and degrading and a lesson in true terror for anyone fool enough to follow suit.
"Go fuck yourself-" she began, but the rest of her venom cut off on a shriek as she was jerked harsh from the ground, feet dangling useless as his fingers ground into tender flesh. His face leering inches from hers, she spat glob of blood-flecked spit dead centre into those cruel dark eyes, felt it connect like a gunshot.
"Ah ah ah, little birdy," his voice singsonged pure malice. "We got some fun in mind first." Her world blackened, breath cut off in her throat. Vision swimming spots dancing dizzy, mind reeling desperate for any shred of light she hadn't seen in too long. If they'd only leave her alone. Let her die in peace.
"I'll scream-" she rasped as he'd have her dead, voice a raw and ragged rasp.
His sinister laugh washed down the nape of her neck, feet thudding the ground at long last. "Ain't screaming gonna save your skinny arse this time round. We got a special show all queued up."
Shattered scream shredded through the room as hands ground cruel and merciless, boots stomping floorboards. She clawing frantleshed like a rabid thing to get away. Begging mercy from nowhere with nobody listening. They'd make sure this time no one ever heard those pretty little pleas again.
---
<B>
( Temp .8, rep pen 1.02, q2k )
(continue this scene:)
The Waystone Inn lay in silence, and it was a silence of three parts.
The most obvious part was a hollow, echoing quiet, made by things that were lacking. If there had been a wind it would have sighed through the trees, set the inn’s sign creaking on its hooks, and brushed the silence down the road like trailing autumn leaves. If there had been a crowd, even a handful of men inside the inn, they would have filled the silence with conversation and laughter, the clatter and clamor one expects from a drinking house during the dark hours of night. If there had been music…but no, of course there was no music. In fact there were none of these things, and so the silence remained
</B>
In the second part of silence, which is a place beyond the empty, there lies something deep and still. It was into this space that the innkeeper, Aldric, stepped as he made his slow rounds, pausing every few paces. In this profound quietude, a different sort of listening could happen. Aldric cocked his head, as though hearing the house settling on its timbers like an old, weary thing. He listened for breath sounds, footsteps in the halls, or perhaps a creaking board that would betray an intruder. But there was nothing. Just silence. Heavy, absolute silence.
The third part of silence is the kind of thing that lives inside the skull. Aldric knew that well enough. Silence as a blank page, an empty space waiting for someone to fill it. He himself filled the silence with thought after thought, like grains of sand trickling down a glass hourglass. He imagined himself as a young man again, with his wife still beside him, still alive. He thought of the inn packed full to bursting, with laughter and revelry spilling out onto the street. He thought of the Wayward Way, that road he couldn't quite forget even though it led nowhere. Aldric sighed heavy, thinking too much on emptiness, and continued on his rounds, his staff clicking against the boards with every step.
In that instant, a soft sound cut across the heavy quietude like a knife. Aldric stopped cold, head whirling around. The sound came again, a rustling noise, like fabric against fabric. Aldric moved toward it, senses straining, straining, straining. In the darkness he saw nothing. He reached for his candle and lit it, and then stopped, cold. There in the shadows cowering behind an armchair, was a little thing no bigger than his shoe. A mouse, quivering like a leaf in a wind. It squeaked, and Aldric sigpped. Just silence again. Just his mind playing tricks. He sighed again, a breath like a candlewick guttering out, and went back to his rounds. Just him and the empty, and the night. And in this night, like a tiny spark of light, a thought occurred to him. It whispered in the darkness like the wind he could almost hear. Almost.
"Hello?" he croaked, voice dry like a bone. His own voice startled him awake. But no answer came. Just silence again. Just him and the dark. And in the dark, the sound of a mouse against the boards. But still, almost like a wind, like music playing so far away he couldn't quite hear it, a voice that said... almost...
"Hello?"
---
<B>
(temp 2.5, rep pen 1.02, Q2K)
Start a 1000 word scene (vivid, graphic horror in first person) with: The sky scraper swayed, as she watched the window in front of her on the 21 floor explode...
</B>
The nightmarish gale had been brewing for weeks. Thick storm clouds blackened the sky like rot, the wind a menacing whisper through skyscraper gutters. Rain slashed the glass panes that made up most of this floor's outer walls as I gazed out into the teeth of it. Lightning crackled, the brief light casting hellish shadows up my frame.
A sound like the universe's teeth gnawing woke me from a deep dreamless slumber. Groggy-eyed, I stumbled away from the window. The noise grew claws - an earsplitting shriek, like an overdriven drill bit carving stone. Glass splintering heralded my doom. Something sinister slithered behind that awful shrieking - something heavy and cold and dead-eyed, dragging its tail like a great slithy snake. It had the air of thing-That-Shouldnt-Be, like they say.
Fuckin' thing slithered through shattered panes and up onto carpeting. Snaked for thirty long, squirming feet across polished tile until it stopped cold. A godawful silence, save for the pum-pum thrum thrum thrum of rain drumming glass panes. And breathless whimpenshuddering me wheezing.
Cautiously sidling closer on bare feet I spied what'd befallen. It wasn of human shape exactly but humanlike enough. Lay prone, one-arms propped up like rigor mortis-struck doll's. Skin pasty yellow as mold cheese. Features too sharp and sunken like someone'd been dead afore ever got any soul put inside 'em. Lacework riddling flesh like maggots gnawing putrid meat - puckery blue lips purple-black splittered teeth stained gnashing red. Ugliness inanimate and alive as god's own creation. No light hit'd that horror-show mug no how - looked like it'd suck out a man's innards given half a chance. Not that you'd ever want to know what lay waiting inside the black maw ag gaping.
Thunder shook whole floor. Window rattled like its glass teeth'd crack. Shattering. Fuckin' thing'd already shattered panes. The thought'd nearly split my head clear in half - wind or luck or worse luck had brought 'er in here past the safety glass meant to keep it out. She's dead now too though - thing of panes naught can live long round these here parts and she was no exception.
Shaky breathers still shudders'n me to think on't even now. But worse'd been yet come.
CRrack! Glass screamed sharp as splitting wood. Up'd come another huge cracker slitherin up betwixt the windowpanes - or least it looked like crack. Big as the dead woman's arm laid atop dead'伙 woman's dead arm. Like to split open my chest to look at, that thing. Naught more nor less nor less nor less than livin' abominationn. If abyss could be bottled and uncorked I bet my leftballs it'd looked just so. Then again if abyss had'd ever cared to take shape human nor even unhuman - not sure as I'd ever call what skulked inside that hell dimensions.
It writhed like slowthroat'd been cut from jaw'd split open wide as hung deadbeef. Wiggling tongue'd flicker out an' lickslip'd down jawlike rift split up middle of dead-whited flesh. Then came teeth. Godawful shinieslobbers drool slobbering down onto tile. Slime or drool or spit or whatever't'd be called it'd slicked down slanted 'n slicked onto polish too. Wet spot spreadin' ever bigger as that abominable mouth gaped wider like some kind of puckered red hungry gape for swallowin. Thing oozing' o' slick onto floor and spittle pooling at dead thing's feet - no'&imgnt even bare sole nor sole could ever touch'n survive.
Then a rumble split open skull'd split open ears'd split clean open. Deep core-deep where dark lay naught but nothing'd been naught 'cept that godawful sound that near split a body's hearin apart. And a thing slither'd from throat'd be splitin apart too, a thing too slick for'n gullet or belly or nuthing man ever laid eyes upon.
A thing no'd call'erd life nor death'd never been seen 'round here.
God save us all, poor thing never hadda seen'd whut lay hidin in there neither.
---
<B>
(temp 1.3, rep pen 1.02, Q2K)
Science Fiction: The Last Transmission - Write a story that takes place entirely within a spaceship's cockpit as the sole surviving crew member attempts to send a final message back to Earth before the ship's power runs out. The story should explore themes of isolation, sacrifice, and the importance of human connection in the face of adversity. 800-1000 words.
</B>
Captain Aria was strapped into the pilot's chair, hands trembling as she tried desperately to manually keep the ship's systems alive for just one more precious second. The console was an eerie blue, only the essential life signs still blinking. Emergency lighting casted a sickly pallor over her exhausted form.
"Come on, come on..." she wheezed, willing the old bird to cling onto life for just long enough to get this done.
It'd been hours since the dreadful impact with the rogue asteroid flock that'd left them adrift, tumbling aimlessly in the black abyss of space. The rest of crew was gone, scattered like stars across the void. Just her and this tin can hurtling into final silence.
Her gloved fingers jabbed at the dying communications panel. "Aria. Aria. Come in." Static crackled back.
Static. The final message received at Earth, if they'd received anything at all. If they still knew she still breathed in here, alone.
Her eyes blurred with tears as she recalled the faces of her crew - their banter, their lives cut short like starburst. She'd seen enough wars end in space to fill three lifetimes. This though, watching your shipmates snuffed like blown-out birthday candles, that'll never leave the mind.
"C'mon baby, let me say goodbye." Captain's voice a broken whisper.
The console sparked as its life breathed out in a wheeze of breath. "Affirmative," a synthetic voice croaked. "Transmission channel active for remaining 2m 15 seconds. Commence final log."
Aria drew a long breath, steadying herself for one last push. She'd been saving it up all along, just in case this moment ever came. A lifetime of love, laughter, loss. All down to one final line. Her fingers shook.
"I love you," she finally said into the void, voice breaking. "Tell them... tell them it was worth it all. Tell them..."
Her eyes welled up as she remembered him, as if he stood there now beside her in that chair. "Tell them it was worth it for you."
Static answered back. The cockpit grew deathly quiet save for the ticking countdown clock.
"C'mon, baby..." she implored, willing just one more word out of nowhere. Just one thing left unsaid. One final breath of life from a soul so nearly spent.
Silence.
Then, a ping. A distant echo of something living, something warm in all that cold.
She startled, craning round to spot it.
There it lay, nestled behind the pilot's seat like a lost child. Her mother's old keepsake. Still operational. Still clinging onto life like it knew somehow to wait for this very second.
Aria lunged for it, fumbling with shaking hands to draw it forth. Clicked the power on. Watched it flare to life.
One final line. One final word. Her final will and testament to Earth.
"Goodbye, everyone," she said hoarse. "Goodbye."
"Message received," the AI voice crackled out, one last time before the light went forever. "Goodbye Aria."
And then, only static, as the stars died and the universe fell still in space and in silence eternal.
THE END.
---
<B>Brainstorm 40x</B>
---
The BRAINSTORM process was developed by David_AU.
Some of the core principles behind this process are discussed in this <a href="https://arxiv.org/pdf/2401.02415">scientific paper: Progressive LLaMA with Block Expansion</a>.
However, I went in a completely different direction from what was outlined in this paper.
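For readers unfamiliar with the paper, here is a rough, illustrative sketch of the block-expansion mechanic it describes (duplicating decoder layers so the copies can be trained or calibrated separately). This is emphatically not the Brainstorm process itself, only the general starting idea; the model id is a placeholder, and the paper's zero-initialization detail is noted but not implemented.

```python
# Illustrative sketch of the paper's block-expansion idea, NOT the Brainstorm process.
# Assumes a Hugging Face LLaMA-style model; the model id below is a placeholder.
import copy
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

expanded = []
for layer in model.model.layers:
    expanded.append(layer)
    expanded.append(copy.deepcopy(layer))  # naive duplicate; the paper zero-initializes parts of each new block

model.model.layers = type(model.model.layers)(expanded)  # rebuild the layer stack as an nn.ModuleList
model.config.num_hidden_layers = len(expanded)
```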
What is "Brainstorm"?
The reasoning center of an LLM is taken apart, reassembled, and expanded.
In this case for this model: 40 times
Then these centers are individually calibrated. These "centers" also interact with each other.
This introduces subtle changes into the reasoning process.
The calibrations further adjust - dial up or down - these "changes" further.
The number of centers (5x, 10x, etc.) allows more "tuning points" to further customize how the model reasons, so to speak.
The core aim of this process is to increase the model's detail, concept and connection to the "world",
general concept connections, prose quality and prose length without affecting instruction following.
This will also enhance any creative use case(s) of any kind, including "brainstorming", creative art form(s), and similar use cases.
Here are some of the enhancements this process brings to the model's performance:
- Prose generation seems more focused on the moment to moment.
- Sometimes there will be "preamble" and/or foreshadowing present.
- Fewer or no "cliches"
- Better overall prose and/or more complex / nuanced prose.
- A greater sense of nuance on all levels.
- Coherence is stronger.
- Description is more detailed, and connected closer to the content.
- Simile and Metaphors are stronger and better connected to the prose, story, and character.
- Sense of "there" / in the moment is enhanced.
- Details are more vivid, and there are more of them.
- Prose generation length can be long to extreme.
- Emotional engagement is stronger.
- The model will take FEWER liberties vs a normal model: It will follow directives more closely but will "guess" less.
- The MORE instructions and/or details you provide, the more strongly the model will respond.
- Depending on the model, the "voice" may be more "human" than the original model's "voice".
Other "lab" observations:
- This process does not, in my opinion, make the model 5x or 10x "smarter" - if only that were true!
- However, a change in "IQ" was not a priority, and was not tested or calibrated for, so to speak.
- From lab testing, the model seems to ponder and consider more carefully, roughly speaking.
- You could say this process sharpens the model's focus on its task(s) at a deeper level.
The process to modify the model occurs at the root level - the source files level. The model can then be quantized as a GGUF, EXL2, AWQ, etc.
--- |
miladalsh/qwen-trained-researcher-on-deepseek-for-40k-samples | miladalsh | 2025-05-28T02:56:56Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-21T11:29:02Z | ---
base_model: Qwen/Qwen2.5-7B-Instruct
library_name: transformers
model_name: qwen-trained-researcher-on-deepseek-for-40k-samples
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for qwen-trained-researcher-on-deepseek-for-40k-samples
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="miladalsh/qwen-trained-researcher-on-deepseek-for-40k-samples", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/milad-it/training-llama-on-conversations/runs/gsr19ip5)
This model was trained with SFT.
### Framework versions
- TRL: 0.17.0
- Transformers: 4.46.3
- Pytorch: 2.5.1+cu121
- Datasets: 3.2.0
- Tokenizers: 0.20.3
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
ReadyArt/Space-Wars-24B-v1.00a_EXL3_2.0bpw_H8 | ReadyArt | 2025-05-28T02:56:07Z | 0 | 0 | null | [
"safetensors",
"mistral",
"sci-fi",
"space-opera",
"worldbuilding",
"speculative-fiction",
"technology",
"futurism",
"text-generation",
"conversational",
"en",
"base_model:spacewars123/Space-Wars-24B-v1.00a",
"base_model:quantized:spacewars123/Space-Wars-24B-v1.00a",
"license:apache-2.0",
"2-bit",
"exl3",
"region:us"
] | text-generation | 2025-05-28T02:53:13Z | ---
license: apache-2.0
language:
- en
base_model:
- spacewars123/Space-Wars-24B-v1.00a
base_model_relation: quantized
quantized_by: gecfdo
pipeline_tag: text-generation
tags:
- sci-fi
- space-opera
- worldbuilding
- speculative-fiction
- technology
- futurism
---
<style>
body {
font-family: 'Quicksand', sans-serif;
background: linear-gradient(135deg, #0a1a1a 0%, #001010 100%);
color: #e1ffff !important;
text-shadow: 0 0 3px rgba(0, 0, 0, 0.7);
margin: 0;
padding: 20px;
transition: all 0.5s ease;
}
@media (prefers-color-scheme: light) {
body {
background: linear-gradient(135deg, #e1ffff 0%, #c0f0ff 100%);
color: #002b36 !important;
text-shadow: 0 0 3px rgba(255, 255, 255, 0.7);
}
}
.container {
min-width: 100%;
margin: 0 auto;
max-width: 1200px;
background: rgba(0, 17, 22, 0.95);
border-radius: 12px;
padding: 30px;
box-shadow: 0 0 20px rgba(0, 255, 255, 0.1);
border: 1px solid rgba(0, 255, 255, 0.2);
position: relative;
overflow: hidden;
}
.container::before {
content: '';
position: absolute;
top: -1px;
left: -1px;
right: -1px;
bottom: -1px;
border: 1px solid rgba(0, 255, 255, 0.5);
border-radius: 12px;
pointer-events: none;
animation: borderGlow 3s ease-in-out infinite alternate;
}
@keyframes borderGlow {
0% {
box-shadow: 0 0 5px rgba(0, 255, 255, 0.3);
border-color: rgba(0, 255, 255, 0.5);
}
50% {
box-shadow: 0 0 15px rgba(255, 0, 255, 0.3);
border-color: rgba(255, 0, 255, 0.5);
}
100% {
box-shadow: 0 0 5px rgba(0, 255, 255, 0.3);
border-color: rgba(0, 255, 255, 0.5);
}
}
.header {
text-align: center;
margin-bottom: 30px;
position: relative;
}
.header::after {
content: '';
position: absolute;
bottom: -15px;
left: 25%;
right: 25%;
height: 1px;
background: linear-gradient(90deg, transparent, rgba(0, 255, 255, 0.5), transparent);
animation: scanline 8s linear infinite;
display: none;
}
@keyframes scanline {
0% { background-position: -100% 0; }
100% { background-position: 200% 0; }
}
.model-name {
color: #00ffff;
font-size: 2.5em;
text-shadow: 0 0 15px rgba(0, 255, 255, 0.5);
margin: 0;
letter-spacing: -1px;
animation: textGlow 4s ease-in-out infinite alternate;
}
@keyframes textGlow {
0% { text-shadow: 0 0 15px rgba(0, 255, 255, 0.5); }
50% { text-shadow: 0 0 20px rgba(255, 0, 255, 0.5); }
100% { text-shadow: 0 0 15px rgba(0, 255, 255, 0.5); }
}
.subtitle {
color: #00ffcc;
font-size: 1.2em;
margin-top: 10px;
animation: subtitleFade 6s ease-in-out infinite;
}
@keyframes subtitleFade {
0%, 100% { opacity: 0.8; }
50% { opacity: 1; }
}
.waifu-container {
margin: 20px -30px;
width: calc(100% + 60px);
overflow: hidden;
border-radius: 8px;
border: 1px solid rgba(0, 255, 255, 0.3);
position: relative;
}
.waifu-container::before {
content: '';
position: absolute;
top: 0;
left: 0;
right: 0;
bottom: 0;
background: linear-gradient(45deg,
rgba(0, 255, 255, 0.1) 0%,
transparent 20%,
transparent 80%,
rgba(255, 0, 255, 0.1) 100%);
pointer-events: none;
animation: gradientSlide 10s linear infinite;
}
@keyframes gradientSlide {
0% { background-position: 0% 0%; }
100% { background-position: 100% 100%; }
}
.waifu-img {
width: 100%;
height: auto;
border-radius: 0;
border: none;
box-shadow: 0 0 40px rgba(0, 255, 255, 0.2);
transition: transform 0.5s ease;
}
.waifu-img:hover {
transform: scale(1.01);
}
.section {
color: #e1ffff;
margin: 25px 0;
padding: 20px;
background: rgba(5, 25, 35, 0.9);
border-radius: 8px;
border: 1px solid rgba(0, 255, 255, 0.15);
position: relative;
transition: all 0.3s ease;
}
.section:hover {
border-color: rgba(255, 0, 255, 0.3);
box-shadow: 0 0 15px rgba(0, 255, 255, 0.1);
}
.section::before {
content: '';
position: absolute;
top: -1px;
left: -1px;
right: -1px;
bottom: -1px;
border: 1px solid rgba(0, 255, 255, 0.3);
border-radius: 8px;
pointer-events: none;
animation: sectionPulse 5s ease-in-out infinite;
}
@keyframes sectionPulse {
0%, 100% { opacity: 0.7; }
50% { opacity: 0.3; }
}
.section-title {
color: #00ffff;
font-size: 1.8em;
margin-top: 0;
text-shadow: 0 0 5px rgba(0, 255, 255, 0.3);
position: relative;
display: inline-block;
}
.section-title::after {
content: '';
position: absolute;
bottom: -5px;
left: 0;
width: 100%;
height: 1px;
background: linear-gradient(90deg, rgba(0, 255, 255, 0.5), rgba(255, 0, 255, 0.5));
transform: scaleX(0);
transform-origin: left;
transition: transform 0.3s ease;
}
.section:hover .section-title::after {
transform: scaleX(1);
}
.quant-links {
display: grid;
grid-template-columns: repeat(2, 1fr);
gap: 15px;
margin: 20px 0;
}
.link-card {
padding: 15px;
background: rgba(20, 35, 45, 0.95);
border-radius: 8px;
transition: all 0.3s ease;
border: 1px solid rgba(0, 255, 255, 0.1);
position: relative;
overflow: hidden;
}
.link-card::before {
content: '';
position: absolute;
top: 0;
left: 0;
right: 0;
height: 2px;
background: linear-gradient(90deg, rgba(0, 255, 255, 0.5), rgba(255, 0, 255, 0.5));
animation: cardScan 4s linear infinite;
}
@keyframes cardScan {
0% { transform: translateX(-100%); }
100% { transform: translateX(100%); }
}
.link-card:hover {
transform: translateY(-3px);
box-shadow: 0 5px 15px rgba(0, 255, 255, 0.2);
border-color: rgba(255, 0, 255, 0.3);
}
.link-card h3 {
margin-top: 0;
color: #e1ffff !important;
}
.link-button {
display: inline-flex;
align-items: center;
background: rgba(0, 255, 255, 0.1);
color: #e1ffff !important;
padding: 8px 15px;
border-radius: 6px;
text-decoration: none;
border: 1px solid rgba(0, 255, 255, 0.3);
margin: 5px 0;
transition: all 0.3s ease;
font-size: 0.95em;
position: relative;
overflow: hidden;
}
.link-button::before {
content: '';
position: absolute;
top: 0;
left: -100%;
width: 100%;
height: 100%;
background: linear-gradient(90deg, transparent, rgba(255, 255, 255, 0.2), transparent);
transition: all 0.5s ease;
}
.link-button:hover {
background: rgba(0, 255, 255, 0.2);
border-color: rgba(0, 255, 255, 0.5);
transform: translateY(-2px);
box-shadow: 0 4px 12px rgba(0, 255, 255, 0.2);
}
.link-button:hover::before {
left: 100%;
}
.link-button::after {
content: '→';
margin-left: 8px;
opacity: 0.7;
transition: all 0.3s ease;
}
.link-button:hover::after {
transform: translateX(3px);
opacity: 1;
}
.button-group {
display: flex;
flex-wrap: wrap;
gap: 10px;
margin: 15px 0;
}
.disclaimer {
color: #00ff99;
border-left: 3px solid #00ff99;
padding-left: 15px;
margin: 20px 0;
position: relative;
}
.disclaimer::before {
content: '⚠️';
position: absolute;
left: -10px;
top: 0;
transform: translateX(-100%);
animation: pulse 2s ease-in-out infinite;
}
@keyframes pulse {
0%, 100% { opacity: 1; }
50% { opacity: 0.5; }
}
.badge {
display: inline-block;
padding: 5px 10px;
border-radius: 5px;
background: rgba(0, 255, 255, 0.1);
border: 1px solid #00ffff;
margin: 5px;
font-size: 0.9em;
animation: badgePulse 3s ease-in-out infinite;
}
@keyframes badgePulse {
0%, 100% { box-shadow: 0 0 5px rgba(0, 255, 255, 0.3); }
50% { box-shadow: 0 0 10px rgba(0, 255, 255, 0.5); }
}
/* Color rules */
.section p,
.section ul li,
.section > p > strong {
color: #00ff99 !important;
}
.section ul li strong {
color: #00ff99 !important;
}
/* Light mode adjustments */
@media (prefers-color-scheme: light) {
.container {
background: rgba(224, 255, 255, 0.95);
border-color: rgba(0, 150, 150, 0.3);
}
.model-name, .section-title, .subtitle {
color: #006666;
text-shadow: 0 0 5px rgba(0, 200, 200, 0.3);
}
.section {
background: rgba(200, 250, 255, 0.9);
border-color: rgba(0, 200, 200, 0.2);
color: #002b36;
}
.section p,
.section ul li,
.section > p > strong {
color: #008080 !important;
}
.section ul li strong {
color: #008080 !important;
}
.link-card {
background: rgba(150, 230, 255, 0.95);
border-color: rgba(0, 150, 150, 0.2);
}
.link-card h3 {
color: #002b36 !important;
}
.link-button {
background: rgba(0, 150, 150, 0.1);
color: #002b36 !important;
border-color: rgba(0, 150, 150, 0.3);
}
.link-button:hover {
background: rgba(0, 150, 150, 0.2);
border-color: rgba(0, 150, 150, 0.5);
}
.disclaimer {
color: #008080;
border-color: #008080;
}
.badge {
border-color: #008080;
background: rgba(0, 150, 150, 0.1);
}
}
/* Interactive features */
.remember-this {
position: relative;
}
.remember-this::after {
content: 'Uploading C:\Users to https://www.fbi.gov/';
position: absolute;
bottom: -20px;
right: 0;
font-size: 0.8em;
color: #66ffff;
opacity: 0;
transition: opacity 0.3s ease;
pointer-events: none;
}
.remember-this:hover::after {
opacity: 0.7;
transition-delay: 1s;
}
.shifty-section {
transition: transform 0.1s ease;
}
.shifty-section:hover {
transform: translateX(10px);
}
.shifty-section::before {
    content: ''; /* a ::before pseudo-element renders only if content is set */
    position: absolute;
top: -25px;
left: 10px;
font-size: 0.7em;
color: #66ffff;
opacity: 0.7;
transition: opacity 3s ease;
pointer-events: none;
}
.shifty-section:hover::before {
opacity: 0;
transition-delay: 5s;
}
footer {
text-align: center;
margin-top: 40px;
position: relative;
}
footer:hover .hidden-message {
    opacity: 0.7; /* reveal on hover; the message is hidden (opacity: 0) by default */
}
.hidden-message {
position: absolute;
bottom: -30px;
width: 100%;
text-align: center;
font-size: 0.8em;
color: #66ffff;
opacity: 0;
transition: opacity 0.3s ease;
pointer-events: none;
}
.flash-warning {
position: fixed;
top: 20px;
right: 20px;
background: rgba(0, 100, 100, 0.2);
padding: 10px;
border-radius: 5px;
border: 1px solid rgba(0, 255, 255, 0.5);
animation: flashWarning 30s ease-in-out forwards;
}
@keyframes flashWarning {
0% { opacity: 0.8; }
10% { opacity: 0; }
20% { opacity: 0.8; }
30% { opacity: 0; }
40% { opacity: 0.8; }
50% { opacity: 0; }
60% { opacity: 0.8; }
70% { opacity: 0; }
80% { opacity: 0.8; }
90% { opacity: 0; }
100% { opacity: 0; display: none; }
}
</style>
<div class="container">
<div class="header">
<h1 class="model-name">Space Wars 24B v1.00a</h1>
<p class="subtitle">Where Stars Collide and Civilizations Rise</p>
</div>
<div class="waifu-container">
<img src="./spacewars.webp" class="waifu-img" alt="Galactic Conflict Hero Image">
</div>
<div class="section remember-this">
<h2 class="section-title">🚀 Cosmic Evolution</h2>
<p>This model pushes the boundaries of interstellar storytelling:</p>
<ul>
<li>🌌 <strong>51 Million Token Dataset</strong> - Exclusively Sci-Fi</li>
<li>🛸 <strong>Enhanced Physics Protocols</strong> - Plausible FTL mechanics and alien ecosystems</li>
<li>⚙️ <strong>Balanced Creativity</strong> - Enabling imaginative concepts</li>
<li>👽 <strong>Xenobiology Expertise</strong> - Detailed alien physiology and cultural systems</li>
<li>🌐 <strong>Galactic Scale Awareness</strong> - Maintains consistency across star systems and timelines</li>
</ul>
</div>
<div class="section shifty-section">
<h2 class="section-title">⚙️ Technical Specifications</h2>
<p><strong>Recommended Settings:</strong> <a href="https://huggingface.co/sleepdeprived3/Mistral-V7-Tekken-T5-XML" class="link-button">Mistral-V7-Tekken-T5-XML</a> (a minimal GGUF loading sketch follows this section)</p>
<div class="quant-links">
<div class="link-card">
<h3>EXL2</h3>
<a href="https://huggingface.co/collections/spacewars123/space-wars-24b-v100-exl2-6835fb322b75933e6eea804b" class="link-button">Quants</a>
</div>
<div class="link-card">
<h3>EXL3</h3>
<a href="https://huggingface.co/collections/spacewars123/space-wars-24b-v100-exl3-6835fb3f4f0d4ad8de7327c5" class="link-button">Quants</a>
</div>
<div class="link-card">
<h3>GGUF</h3>
<a href="https://huggingface.co/mradermacher/Space-Wars-24B-v1.00a-GGUF" class="link-button">Quants</a>
</div>
<div class="link-card">
<h3>iMatrix</h3>
<a href="https://huggingface.co/mradermacher/Space-Wars-24B-v1.00a-i1-GGUF" class="link-button">Quants</a>
</div>
</div>
</div>
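For local inference, here is a minimal, untested sketch of loading one of the linked GGUF quants with llama-cpp-python. The filename pattern, context size, and sampling values are assumptions; check the GGUF repository for the actual quant filenames and prefer the linked settings preset for sampling.

```python
# Untested sketch: load a GGUF quant of Space Wars with llama-cpp-python.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/Space-Wars-24B-v1.00a-GGUF",
    filename="*Q4_K_M*",  # glob pattern; the actual filename may differ
    n_ctx=8192,           # assumption; adjust to your hardware
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Describe a first-contact scenario at a derelict jump gate."}],
    max_tokens=256,
    temperature=0.8,      # assumption; use the recommended preset instead
)
print(out["choices"][0]["message"]["content"])
```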
<div class="section">
<h2 class="section-title">🌌 Creative Freedom</h2>
<div class="disclaimer">
<p>This model operates with unrestricted imagination:</p>
<ul>
<li>🚀 No constraints on speculative physics concepts</li>
<li>👽 Will generate detailed alien civilizations</li>
<li>⚛️ Handles complex temporal paradoxes</li>
<li>🌍 Creates plausible planetary ecosystems</li>
</ul>
</div>
</div>
<div class="section shifty-section">
<h2 class="section-title">📜 Performance Features</h2>
<ul>
<li>🌠 Maintains narrative coherence across light-year scales</li>
<li>🪐 Handles multi-species diplomatic scenarios</li>
<li>🧠 Excels at long-form galactic history generation</li>
<li>⚡ Improved handling of technobabble and pseudo-science</li>
<li>🔭 Responds to hard sci-fi prompts with technical accuracy</li>
<li>🤖 Creates nuanced AI character motivations</li>
</ul>
</div>
<div class="section remember-this">
<h2 class="section-title">👨 Model Architects</h2>
<ul>
<li>SpaceWars123 Team (Dataset Curation)</li>
<li>ReadyArt/Artus/gecfdo (Quantization Specialists)</li>
<li>sleepdeprived3 (Fine-Tuning Engineer)</li>
</ul>
</div>
<div class="section">
<h2 class="section-title">Enjoy the finest LLM hosting money can buy</h2>
<div class="button-group">
<a href="https://www.parasail.io/" class="link-button">Parasail Website</a>
<a href="https://discord.gg/PZ654kgAry" class="link-button">Parasail Discord</a>
</div>
</div>
<div class="section">
<h2 class="section-title">🔖 License & Usage</h2>
<p>By using this model, you agree:</p>
<ul>
<li>To adhere to Apache 2.0 license terms</li>
<li>That generated content is your responsibility</li>
<li>v1.00a is the base model of Space Wars.</li>
<li>v1.00b is a merge with another roleplay model.</li>
</ul>
</div>
</div> |
OpenSound/SoloSpeech-models | OpenSound | 2025-05-28T02:55:42Z | 0 | 1 | null | [
"arxiv:2505.19314",
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | 2025-05-25T14:34:48Z | ---
license: cc-by-nc-sa-4.0
---
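This repository hosts the pretrained SoloSpeech checkpoints. Inference itself is driven by the SoloSpeech codebase (see the paper below); as a minimal sketch, the checkpoints can be fetched with `huggingface_hub` (the local directory is an arbitrary assumption):

```python
# Minimal sketch: download the SoloSpeech checkpoints.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="OpenSound/SoloSpeech-models",
    local_dir="./solospeech-models",  # assumption; choose any path
)
print(f"Checkpoints saved to {local_dir}")
```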
## Reference
```
@misc{wang2025solospeechenhancingintelligibilityquality,
title={SoloSpeech: Enhancing Intelligibility and Quality in Target Speech Extraction through a Cascaded Generative Pipeline},
author={Helin Wang and Jiarui Hai and Dongchao Yang and Chen Chen and Kai Li and Junyi Peng and Thomas Thebaud and Laureano Moro Velazquez and Jesus Villalba and Najim Dehak},
year={2025},
eprint={2505.19314},
archivePrefix={arXiv},
primaryClass={eess.AS},
url={https://arxiv.org/abs/2505.19314},
}
``` |