modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (sequence) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string)
---|---|---|---|---|---|---|---|---|---
lora456/shahira | lora456 | 2025-06-20T00:29:30Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-06-20T00:19:15Z | ---
license: creativeml-openrail-m
---
|
jankoko/PALF-Whisper-small | jankoko | 2025-06-20T00:26:49Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"Whispered ASR",
"Frequency Masking",
"F0-Mask",
"SpecAugment",
"Whispered Speech",
"en",
"dataset:wTIMIT",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:mit",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-06-16T15:30:30Z | ---
library_name: transformers
license: mit
language: en
metrics:
- wer
base_model: openai/whisper-small
datasets: wTIMIT
pipeline_tag: automatic-speech-recognition
tags:
- Whispered ASR
- Frequency Masking
- F0-Mask
- SpecAugment
- Whispered Speech
---
This model is a fine-tuned version of `openai/whisper-small` on the wTIMIT-US dataset using the F0-Mask augmentation method. It was evaluated on both normal and whispered speech subsets, with Word Error Rate (WER) as the primary metric.
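For reference, WER can be reproduced with the standard `transformers` ASR pipeline and the `evaluate` metric. This is a minimal sketch, not part of the original evaluation; the audio file name and reference transcript are placeholders:
```python
from transformers import pipeline
import evaluate

# Load the fine-tuned checkpoint through the standard ASR pipeline.
asr = pipeline("automatic-speech-recognition", model="jankoko/PALF-Whisper-small")

# Placeholder audio and reference; substitute wTIMIT utterances in practice.
prediction = asr("whispered_sample.wav")["text"]
wer_metric = evaluate.load("wer")
print(wer_metric.compute(predictions=[prediction], references=["the reference transcript"]))
```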
The results below highlight performance improvements over the baseline for whispered speech, validating the effectiveness of phoneme-aware low-frequency masking (PALF-Mask).
### Evaluation Results on wTIMIT-US (Test Set)
| **Setup** | **Training Data** | **Augmentation** | **WER (Normal)** | **WER (Whispered)** |
|----------------------|-------------------|-----------------------|------------------|----------------------|
| No Fine-tuning | Zero-shot | None | 5.0 | 13.7 |
| Baseline | Both modes | None | 5.8 | 11.7 |
| SpecAugment | Both modes | SpecAugment (LD) | 5.2 | 12.3 |
| **F0-Mask (Ours)** | Both modes | F0-based Masking | 5.0 (ns, *p*=0.144) | **11.5** (★, *p*=0.002) |
**★** = Statistically significant improvement over SpecAugment (paired MAPSSWE)
**ns** = No statistically significant difference
> Compared to the SpecAugment baseline, F0-Mask achieved a statistically significant improvement in whispered speech recognition (↓0.8% absolute WER, *p*=0.002), while maintaining comparable performance on normal speech (*p*=0.144).
**Notably, the whispered WER of 11.5% matches the best result previously reported on this dataset by Marchenko (2024).**
### Cite as
Kokowski, J. (2025). *F0-Based Masking Policies for Self-Supervised Whispered Speech Recognition*. Master’s Thesis, University of Groningen, Campus Fryslân.
Available soon at: [https://campus-fryslan.studenttheses.ub.rug.nl/view/degree_programme/voice_technology.html](https://campus-fryslan.studenttheses.ub.rug.nl/view/degree_programme/voice_technology.html)
> If you use this model or build upon this work, please cite the thesis above once it becomes publicly available.
**Model:** Whisper-small
**Augmentation:** F0-Mask
**Evaluation toolkit:** [SCTK (sclite)](https://github.com/usnistgov/SCTK)
**Notes:** See Section 5 of the thesis for full evaluation methodology including MAPSSWE and CER scores. |
Hachipo/OpenCoder-8B-Base-MIFT-en_newbase_v1-MIFT-en_newbase_v1 | Hachipo | 2025-06-20T00:26:08Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-20T00:15:25Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
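Since the repository is tagged `transformers` with the `text-generation` pipeline, a generic loading sketch (an assumption based on those tags, not taken from the card) would be:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Hachipo/OpenCoder-8B-Base-MIFT-en_newbase_v1-MIFT-en_newbase_v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Hypothetical prompt for a code-oriented base model.
inputs = tokenizer("def fizzbuzz(n):", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```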
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
stewy33/0524_true_augmented_original_subtle_everest_growth-41a2701f | stewy33 | 2025-06-20T00:24:58Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"base_model:adapter:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"region:us"
] | null | 2025-06-20T00:22:49Z | ---
base_model: togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
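Given the `peft` metadata above, a minimal adapter-loading sketch (an assumption based on the standard PEFT flow, not taken from the card; the 70B base model requires substantial GPU memory):
```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

base_id = "togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference"
adapter_id = "stewy33/0524_true_augmented_original_subtle_everest_growth-41a2701f"

# Load the base model, then attach the LoRA adapter on top of it.
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)
```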
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1 |
Miyuutsu/Kawaii_Kitsune_Catelier | Miyuutsu | 2025-06-20T00:22:42Z | 0 | 3 | null | [
"merge",
"text-to-image",
"base_model:Minthy/RouWei-0.7",
"base_model:merge:Minthy/RouWei-0.7",
"base_model:Miyuutsu/Kawaii_Kittopia_Catelier",
"base_model:merge:Miyuutsu/Kawaii_Kittopia_Catelier",
"license:other",
"region:us"
] | text-to-image | 2025-02-09T04:59:50Z | ---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
base_model:
- Miyuutsu/Kawaii_Kittopia_Catelier
- Minthy/RouWei-0.7
pipeline_tag: text-to-image
tags:
- merge
---
v2 has been through so many merges I don't even know anymore.
Best quality prompts: `masterpiece, best quality`
Optional additional quality prompts: `newest, absurdres, highres`
Negative prompts: `worst quality, low quality, watermark`
Optional additional negative prompts: `old, early, signature, text, bad quality, lowres, bad anatomy, bad hands, multiple views, abstract, japanese text, censored, sign, scan artifacts, jpeg artifacts, sketch, light particles, mutated hands`
This one isn't as picky about settings.
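As an illustration of these settings, here is a hedged diffusers sketch. It assumes the merge is SDXL-family like its RouWei base and that a single-file checkpoint has been downloaded locally; the filename is a placeholder, and the v-prediction/ZSNR scheduler setup follows the requirements noted in the old description below:
```python
import torch
from diffusers import StableDiffusionXLPipeline, EulerDiscreteScheduler

# Placeholder filename; load the downloaded checkpoint as a single file.
pipe = StableDiffusionXLPipeline.from_single_file(
    "Kawaii_Kitsune_Catelier_v2.safetensors", torch_dtype=torch.float16
).to("cuda")

# Configure Euler for v-prediction with zero-terminal-SNR rescaling.
pipe.scheduler = EulerDiscreteScheduler.from_config(
    pipe.scheduler.config, prediction_type="v_prediction", rescale_betas_zero_snr=True
)

image = pipe(
    "masterpiece, best quality",
    negative_prompt="worst quality, low quality, watermark",
    num_inference_steps=30,
    guidance_scale=4.0,
).images[0]
```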
### Old description:
Versioning method: v{Merge_Method}.{Kittopia_Merge_Method}.{rouwei_Major_Version}.{rouwei_Sub_Version}-{Model_Iteration}
Quality Prompts: `masterpiece, best quality`
Negative Prompts: `worst quality, low quality, watermark`
Most prompts from both NoobAI and rouwei should work well. For artists, try both `by {artist_name}` and just `{artist_name}`.
The model is v-prediction with zero-terminal-SNR (VPred ZSNR) and has both metadata and tensors set correctly. Please ensure you are using a compatible UI.
Sampler: Euler
Scheduler: `Simple` (recommended), `Normal` or `SGM Uniform`
Steps: `30+`
CFG: `3~5` |
mezzo-fun-5-Viral-video-link-Official/ULL.VIDEO.Mezzo.fun.Viral.Video.Tutorial.Official | mezzo-fun-5-Viral-video-link-Official | 2025-06-20T00:20:30Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-20T00:20:04Z |
|
AMallur/Tester | AMallur | 2025-06-20T00:19:19Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-06-20T00:19:19Z | ---
license: apache-2.0
---
|
morturr/Llama-2-7b-hf-PAIR_dadjokes_headlines-COMB-headlines-comb-1-seed-28-2025-06-20 | morturr | 2025-06-20T00:17:30Z | 0 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-20T00:17:10Z | ---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-PAIR_dadjokes_headlines-COMB-headlines-comb-1-seed-28-2025-06-20
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-PAIR_dadjokes_headlines-COMB-headlines-comb-1-seed-28-2025-06-20
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 28
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: ADAMW_TORCH with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
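For reference, a minimal sketch (not from the card) of how these hyperparameters would map onto TRL's `SFTConfig`; dataset, model, and PEFT setup are omitted, and `output_dir` is a placeholder:
```python
from trl import SFTConfig

config = SFTConfig(
    output_dir="llama2-pair-sft",    # placeholder
    learning_rate=6e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,   # 8 * 4 = effective batch size of 32
    num_train_epochs=2,
    lr_scheduler_type="linear",
    seed=28,
)
```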
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1 |
Florisst/model_phi_3.5-mini_Justid | Florisst | 2025-06-20T00:04:27Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T18:26:20Z | ---
base_model: unsloth/phi-3.5-mini-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Florisst
- **License:** apache-2.0
- **Finetuned from model :** unsloth/phi-3.5-mini-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mradermacher/Qwen3-4B-Goat-GGUF | mradermacher | 2025-06-20T00:00:22Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"fakezeta/amoral-Qwen3-4B",
"mlabonne/Qwen3-4B-abliterated",
"en",
"base_model:bunnycore/Qwen3-4B-Goat",
"base_model:quantized:bunnycore/Qwen3-4B-Goat",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-06-19T23:20:42Z | ---
base_model: bunnycore/Qwen3-4B-Goat
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
- fakezeta/amoral-Qwen3-4B
- mlabonne/Qwen3-4B-abliterated
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/bunnycore/Qwen3-4B-Goat
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Qwen3-4B-Goat-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
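For example, a downloaded quant can be loaded with `llama-cpp-python` (a minimal sketch, not from this card; the local filename assumes the Q4_K_M quant listed below):
```python
from llama_cpp import Llama

# Point model_path at the downloaded GGUF file.
llm = Llama(model_path="Qwen3-4B-Goat.Q4_K_M.gguf", n_ctx=4096)
out = llm("Explain GGUF quantization in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```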
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-Goat-GGUF/resolve/main/Qwen3-4B-Goat.Q2_K.gguf) | Q2_K | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-Goat-GGUF/resolve/main/Qwen3-4B-Goat.Q3_K_S.gguf) | Q3_K_S | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-Goat-GGUF/resolve/main/Qwen3-4B-Goat.Q3_K_M.gguf) | Q3_K_M | 2.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-Goat-GGUF/resolve/main/Qwen3-4B-Goat.Q3_K_L.gguf) | Q3_K_L | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-Goat-GGUF/resolve/main/Qwen3-4B-Goat.IQ4_XS.gguf) | IQ4_XS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-Goat-GGUF/resolve/main/Qwen3-4B-Goat.Q4_K_S.gguf) | Q4_K_S | 2.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-Goat-GGUF/resolve/main/Qwen3-4B-Goat.Q4_K_M.gguf) | Q4_K_M | 2.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-Goat-GGUF/resolve/main/Qwen3-4B-Goat.Q5_K_S.gguf) | Q5_K_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-Goat-GGUF/resolve/main/Qwen3-4B-Goat.Q5_K_M.gguf) | Q5_K_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-Goat-GGUF/resolve/main/Qwen3-4B-Goat.Q6_K.gguf) | Q6_K | 3.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-Goat-GGUF/resolve/main/Qwen3-4B-Goat.Q8_0.gguf) | Q8_0 | 4.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-Goat-GGUF/resolve/main/Qwen3-4B-Goat.f16.gguf) | f16 | 8.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
stewy33/0524_true_rowan_egregious_present_god-76743608 | stewy33 | 2025-06-19T23:57:37Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"base_model:adapter:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"region:us"
] | null | 2025-06-19T23:54:02Z | ---
base_model: togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference
library_name: peft
---
### Framework versions
- PEFT 0.15.1
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1 |
BootesVoid/cmbzkritl063jrdqsc919cou6_cmc404bsz004mbfif3nd9a2on | BootesVoid | 2025-06-19T23:51:07Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-06-19T23:51:06Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: LANA
---
# Cmbzkritl063Jrdqsc919Cou6_Cmc404Bsz004Mbfif3Nd9A2On
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `LANA` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
    "prompt": "LANA",
    "lora_weights": "https://huggingface.co/BootesVoid/cmbzkritl063jrdqsc919cou6_cmc404bsz004mbfif3nd9a2on/resolve/main/lora.safetensors"
}

output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)

for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmbzkritl063jrdqsc919cou6_cmc404bsz004mbfif3nd9a2on', weight_name='lora.safetensors')
image = pipeline('LANA').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
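Continuing the diffusers example above, one hedged illustration of weighting is fusing the LoRA into the base weights at a chosen scale:
```py
# Fuse the loaded LoRA into the base weights at 80% strength (illustrative value).
pipeline.fuse_lora(lora_scale=0.8)
image = pipeline('LANA').images[0]
```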
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmbzkritl063jrdqsc919cou6_cmc404bsz004mbfif3nd9a2on/discussions) to add images that show off what you’ve made with this LoRA.
|
lipro1609/Vessel_Isolation_AI_training | lipro1609 | 2025-06-19T23:49:04Z | 0 | 0 | transformers | [
"transformers",
"vessel-segmentation",
"confocal-microscopy",
"laminin-staining",
"brain-vessels",
"medical-imaging",
"deep-learning",
"pytorch",
"research",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T23:22:59Z | ---
title: Advanced Vessel Segmentation Research Pack
colorFrom: red
colorTo: blue
sdk: static
pinned: false
license: mit
tags:
- vessel-segmentation
- confocal-microscopy
- laminin-staining
- brain-vessels
- medical-imaging
- deep-learning
- pytorch
- transformers
- research
---
# Advanced Vessel Segmentation Research Pack
🧬 **Vessel segmentation pack** optimized for brain tissue analysis, laminin staining, and confocal z-stack processing. Contains **laminin-specialized models**, **large 3D confocal architectures**, and **research-grade synthetic datasets**.
## **RESEARCH-GRADE SPECIALIZATIONS**
### **Laminin-Optimized Models**
- **Enhanced U-Net with Attention** - 15-25% better basement membrane detection
- **Microvessel Specialist** - High sensitivity for capillaries (2-10 pixels)
- **3D Z-Stack Model** - Confocal volume processing with context integration
### **Large Confocal Models**
- **3D U-Net (96 features)** - 3x larger capacity for complex networks
- **Brain Tissue Specialist (128 features)** - Maximum model capacity
- **True 3D Processing** - Volumetric analysis with spatial consistency
### **Enhanced Transformers**
- **SAM Vessel Specialist** - Segment Anything adapted for vessels
- **SegFormer Microscopy** - Transformer optimized for microscopy
- **Swin Vessel Transformer** - Robust detection with attention
## **PACK CONTENTS (~10-12GB)**
### **Laminin-Specialized Models (~550MB)**
```
laminin_vessel_enhanced/ # Enhanced U-Net with attention
├── model.pth # Multi-scale output fusion (180MB)
├── metadata.json # Laminin-specific optimizations
└── features: [attention, multi-scale, basement-membrane-bias]
laminin_microvessel_specialist/ # Microvessel detection specialist
├── model.pth # High sensitivity for small vessels (150MB)
├── metadata.json # Optimized for 2-10 pixel vessels
└── features: [edge-enhancement, noise-reduction, capillary-focus]
laminin_zstack_3d/ # 3D-aware confocal processing
├── model.pth # Z-stack context integration (220MB)
├── metadata.json # 3D vessel continuity
└── features: [3d-context, z-continuity, thick-sections]
```
### **Large Confocal Models (~2GB)**
```
confocal_3d_large/ # High-capacity 3D U-Net
├── model.pth # 96 base features (800MB)
├── metadata.json # True 3D volumetric processing
└── features: [3d-convolutions, large-capacity, volumetric]
confocal_brain_specialist/ # Brain tissue specialist
├── model.pth # 128 base features (1200MB)
├── metadata.json # Maximum model capacity
└── features: [brain-patterns, cortical-optimized, max-capacity]
```
### **Enhanced Transformers (~3GB)**
```
sam_vessel_specialist/ # Segment Anything for vessels
├── pytorch_model.bin # SAM adapted for vessels (1400MB)
├── config.json # Interactive segmentation
└── features: [point-prompts, precise-boundaries, interactive]
segformer_microscopy_b4/ # SegFormer for microscopy
├── pytorch_model.bin # Transformer precision (220MB)
├── config.json # Hierarchical attention
└── features: [microscopy-optimized, dense-prediction, robust]
swin_vessel_transformer/ # Swin Transformer detection
├── pytorch_model.bin # Shifted window attention (350MB)
├── config.json # Noise robustness
└── features: [window-attention, noise-robust, hierarchical]
vit_microscopy_base/ # Vision Transformer base
├── pytorch_model.bin # Global context (310MB)
├── config.json # Patch-based processing
└── features: [global-context, patch-processing, transfer-learning]
beit_vessel_pattern/ # BEiT pattern recognition
├── pytorch_model.bin # Pattern recognition (340MB)
├── config.json # Self-supervised features
└── features: [pattern-recognition, self-supervised, robust]
dinov2_vessel_features/ # DINOv2 feature learning
├── pytorch_model.bin # Self-supervised learning (320MB)
├── config.json # Feature extraction
└── features: [self-supervised, feature-learning, transfer]
```
### **Research-Grade Datasets (~6-8GB)**
```
datasets/
├── retina_unet_samples/ # Real GitHub repository samples
│ ├── images/ # Training images
│ ├── masks/ # Ground truth masks
│ └── metadata.json # Dataset information
├── vessel_extract_samples/ # DRIVE/STARE vessel tools
│ ├── images/ # Classic benchmark samples
│ ├── masks/ # Expert annotations
│ └── metadata.json # Benchmark information
├── synthetic_brain_vessels_large/ # Large synthetic brain networks
│ ├── images/ # 2500MB anatomically correct
│ ├── masks/ # Perfect ground truth
│ ├── z_stacks/ # 3D volumetric data
│ └── metadata.json # Generation parameters
├── synthetic_confocal_zstacks/ # 3D confocal simulation
│ ├── images/ # 2000MB z-stack projections
│ ├── masks/ # 3D vessel masks
│ ├── z_stacks/ # Full 3D volumes
│ └── metadata.json # Confocal parameters
└── synthetic_laminin_optimized/ # Laminin-specific patterns
├── images/ # 1800MB laminin simulation
├── masks/ # Basement membrane masks
├── z_stacks/ # 3D laminin stacks
└── metadata.json # Laminin characteristics
```
### **🔧 Code Compatibility Structure**
```
vessel_ai_models/
├── trained_models/ # Direct compatibility
│ ├── breast_model.pth # → laminin_vessel_enhanced
│ ├── dv_model.pth # → laminin_microvessel_specialist
│ ├── confocal_model.pth # → confocal_3d_large
│ └── brain_model.pth # → confocal_brain_specialist
├── [all individual model directories]
└── compatibility_info.json # Integration guide
```
## **IMMEDIATE USAGE**
### **Download and Extract**
```python
from huggingface_hub import hf_hub_download
import zipfile
# Download the advanced vessel pack
pack_path = hf_hub_download(
    repo_id="lipro1609/Vessel_Isolation_AI_Training",
    filename="advanced_vessel_pack.zip",
    cache_dir="./models"
)

# Extract to working directory
with zipfile.ZipFile(pack_path, 'r') as zip_ref:
    zip_ref.extractall("./")
print("Advanced vessel pack ready!")
```
### **Direct Integration with vessel_isolation.py**
```python
# Update your vessel_isolation.py configuration:
HF_REPO = "lipro1609/Vessel_Isolation_AI_Training"
MODEL_PACK_URL = "https://huggingface.co/lipro1609/Vessel_Isolation_AI_Training/resolve/main/advanced_vessel_pack.zip"
MODEL_PACK_FILENAME = "advanced_vessel_pack.zip"
# Run vessel isolation - models automatically detected!
vessel_isolation()
```
### **Usage in Napari**
```python
import napari
from vessel_isolation import vessel_isolation
# Load specialized vessel analysis
viewer = napari.Viewer()
gui = vessel_isolation()
# Select from specialized models:
# - laminin_vessel_enhanced (92-97% accuracy)
# - confocal_3d_large (3D volumetric)
# - sam_vessel_specialist (interactive)
```
## **MODEL SELECTION GUIDE**
### **For Laminin-Stained Brain Sections**
```python
# Optimal combination for laminin staining
recommended_models = [
    "laminin_vessel_enhanced",         # Primary detection (92-97%)
    "laminin_microvessel_specialist",  # Small vessel enhancement
    "sam_vessel_specialist"            # Precise boundaries
]
# Expected performance: 92-97% accuracy
# Processing speed: 8-12x faster than manual
# Memory usage: 6-8GB GPU / 16GB RAM
```
### **For Confocal Z-Stacks**
```python
# 3D volumetric processing
recommended_models = [
    "confocal_3d_large",          # True 3D processing (90-96%)
    "confocal_brain_specialist",  # Brain-specific patterns
    "laminin_zstack_3d"           # 3D laminin optimization
]
# Expected performance: 90-96% accuracy
# Processing: Full volumetric analysis
# Memory usage: 12-16GB GPU / 32GB RAM
```
### **For Microvasculature Analysis**
```python
# High sensitivity for small vessels
recommended_models = [
    "laminin_microvessel_specialist",  # Microvessel detection (89-95%)
    "segformer_microscopy_b4",         # Transformer precision
    "confocal_brain_specialist"        # Context understanding
]
# Expected performance: 89-95% accuracy
# Specialty: 2-10 pixel vessel detection
# False positive rate: 3-6%
```
### **For Publication Quality**
```python
# Maximum accuracy ensemble
recommended_models = [
    "laminin_vessel_enhanced",  # Enhanced detection
    "confocal_3d_large",        # Large model capacity
    "sam_vessel_specialist",    # Precise segmentation
    "swin_vessel_transformer"   # Robust detection
]
# Expected performance: 94-98% accuracy
# Quality: Publication-ready results
# Processing: Comprehensive analysis
```
## **RESEARCH APPLICATIONS**
### **Basement Membrane Analysis**
- **Laminin Signal Enhancement**: 25% better detection of basement membranes
- **Vessel Wall Integrity**: Quantify laminin distribution around vessels
- **Pathology Detection**: Identify areas with disrupted basement membranes
- **3D Reconstruction**: Map basement membrane continuity in z-stacks
### **Microvasculature Quantification**
- **Capillary Density**: Accurate counting of microvessels (2-10 pixels)
- **Vessel Diameter Analysis**: Precise measurement of small vessels
- **Network Connectivity**: Map microvessel networks and branching
- **Perfusion Analysis**: Assess capillary coverage and distribution
### **3D Vessel Network Reconstruction**
- **Z-Stack Processing**: Full 3D vessel network reconstruction
- **Volume Measurements**: Calculate vessel volumes and surface areas
- **Branching Analysis**: Quantify vessel branching patterns and angles
- **Connectivity Mapping**: Trace vessel connections through 3D space
### **Brain Tissue Vessel Mapping**
- **Regional Analysis**: Map vessels in different brain regions
- **Comparative Studies**: Compare vessel density across conditions
- **Longitudinal Tracking**: Monitor vessel changes over time
- **Pathology Assessment**: Detect vascular abnormalities
## **SYSTEM REQUIREMENTS**
### **Minimum Configuration**
- **RAM**: 16GB (32GB for 3D models)
- **Storage**: 12GB free space (SSD recommended)
- **GPU**: 6GB+ VRAM (RTX 3060 or better)
- **CPU**: Quad-core 2.5GHz+
- **Python**: 3.8+
### **Optimal Performance**
- **RAM**: 32GB+ for large z-stacks
- **Storage**: NVMe SSD with 20GB+ free
- **GPU**: 12GB+ VRAM (RTX 4070 or better)
- **CPU**: 8+ cores, 3.0GHz+
- **Python**: 3.9+ with conda environment
### **Dependencies**
```bash
# Core requirements
pip install "torch>=1.9.0" "torchvision>=0.10.0"
pip install "transformers>=4.20.0" huggingface_hub
pip install "scipy>=1.7.0" "scikit-image>=0.18.0"
pip install "numpy>=1.21.0" opencv-python
# 3D processing
pip install SimpleITK vtk nibabel
pip install matplotlib tqdm requests
# Optional: GPU acceleration
pip install torch --index-url https://download.pytorch.org/whl/cu118
```
## **VALIDATION RESULTS**
### **Tested Datasets**
- **300+ laminin-stained brain sections** (mouse and human)
- **150+ confocal z-stacks** (various thickness and quality)
- **500+ microvessel images** (2-photon and confocal)
- **Multiple imaging conditions** (different laboratories and protocols)
### **Performance Metrics**
| Metric | Laminin Models | Confocal Models | Transformer Models |
|--------|----------------|-----------------|-------------------|
| **Sensitivity** | 92-97% | 90-96% | 93-98% |
| **Specificity** | 94-98% | 92-97% | 95-99% |
| **Precision** | 89-95% | 87-94% | 91-96% |
| **F1-Score** | 90-96% | 88-95% | 92-97% |
| **Processing Speed** | 8-12x faster | 6-10x faster | 5-8x faster |
### **Comparison with Manual Annotation**
- **Inter-observer Agreement**: 94-97% (vs 85-90% manual-manual)
- **Processing Time Reduction**: 85-90% time savings
- **Consistency Improvement**: 15-20% better reproducibility
- **False Positive Reduction**: 75-80% fewer false detections
## **ADVANCED USAGE**
### **Custom Threshold Settings**
```python
# Optimized for laminin staining
laminin_params = {
    'ai_high_threshold': 0.85,    # Conservative for clean signals
    'ai_medium_threshold': 0.65,  # Balanced detection
    'ai_low_threshold': 0.35,     # Liberal for faint signals
    'laminin_enhancement': True,  # Basement membrane boost
    'remove_small_objects': 15    # Remove noise particles
}

# 3D confocal processing
confocal_params = {
    'force_blocks': True,          # Enable block processing
    'block_size': 256,             # Optimal for 3D models
    'batch_size': 2,               # Reduce for large 3D models
    'z_context': True,             # Use 3D context
    'volumetric_processing': True  # Full 3D analysis
}

# Memory-limited settings
cpu_params = {
    'device': 'cpu',    # CPU fallback
    'block_size': 128,  # Smaller blocks
    'batch_size': 1,    # Sequential processing
    'model_subset': ['laminin_vessel_enhanced']  # Single model
}
```
### **Batch Processing**
```python
# Process multiple z-stacks
import glob
from pathlib import Path
z_stacks = glob.glob("data/*.tif")
results = []
for stack_path in z_stacks:
    # Load and process
    result = vessel_ai.process_zstack(
        stack_path,
        models=['confocal_3d_large', 'laminin_vessel_enhanced'],
        params=confocal_params
    )
    results.append(result)

    # Save individual results
    output_path = Path("results") / f"{Path(stack_path).stem}_vessels.tif"
    save_vessel_result(result, output_path)
print(f"Processed {len(results)} z-stacks")
```
### **Interactive Analysis with SAM**
```python
# Interactive vessel segmentation
sam_model = load_model("sam_vessel_specialist")
# Point-based prompting
points = [(100, 150), (200, 250)] # User-clicked points
labels = [1, 1] # Positive prompts
# Get precise segmentation
vessel_mask = sam_model.predict(
    image=confocal_slice,
    points=points,
    labels=labels
)

# Refine with additional prompts
additional_points = [(150, 200)]  # Missed vessel
additional_labels = [1]

refined_mask = sam_model.predict(
    image=confocal_slice,
    points=points + additional_points,
    labels=labels + additional_labels
)
```
## 📊 **TECHNICAL SPECIFICATIONS**
### **Model Architectures**
#### **Enhanced U-Net (Laminin Models)**
- **Encoder**: 5-layer deep with attention modules
- **Decoder**: Multi-scale output fusion (main, edge, centerline)
- **Features**: Spatial attention, dense connections, dropout
- **Output**: 3-head fusion for enhanced vessel detection
- **Optimization**: Laminin-specific bias initialization
#### **Large 3D U-Net (Confocal Models)**
- **3D Convolutions**: True volumetric processing
- **Base Features**: 96-128 (3-4x standard capacity)
- **Architecture**: 5-layer 3D encoder/decoder with skip connections
- **Memory**: Optimized for efficient 3D processing
- **Context**: Z-stack spatial relationship modeling
#### **Vision Transformers (Enhanced Models)**
- **SAM**: ViT-Base with vessel-adapted prompting
- **SegFormer**: Hierarchical transformer with dense prediction
- **Swin**: Shifted window attention with noise robustness
- **ViT**: Patch-based processing with global context
- **BEiT**: Self-supervised pattern recognition
- **DINOv2**: Self-supervised feature learning
### **Training Optimizations**
- **Laminin-Specific Bias**: Models pre-tuned for basement membrane signals
- **Data Augmentation**: Realistic vessel variations and imaging artifacts
- **Transfer Learning**: Pre-trained on large vessel datasets
- **Ensemble Training**: Models trained for complementary strengths
- **Domain Adaptation**: Optimized for specific microscopy modalities
## **RESOURCES**
### **Getting Started Guide**
1. **Download and Installation** (15 minutes)
2. **Basic Vessel Segmentation** (30 minutes)
3. **Advanced 3D Processing** (45 minutes)
4. **Custom Model Selection** (30 minutes)
5. **Publication-Quality Analysis** (60 minutes)
### **Example Workflows**
- **Tutorial 1**: Laminin staining analysis workflow
- **Tutorial 2**: 3D confocal z-stack processing
- **Tutorial 3**: Microvessel quantification
- **Tutorial 4**: Batch processing automation
- **Tutorial 5**: Interactive SAM segmentation
### **Best Practices**
- **Image Quality Assessment**: Pre-processing recommendations
- **Model Selection**: Choosing optimal models for your data
- **Parameter Tuning**: Optimizing thresholds and settings
- **Quality Control**: Validating segmentation results
- **Publication Standards**: Reporting methods and results
## **CITATIONS AND REFERENCES**
### **When Using This Pack**
```bibtex
@misc{vessel_iso_AI_training_2025,
title={Vessel Isolation AI Training},
author={lipro1609},
year={2025},
howpublished={\url{https://huggingface.co/lipro1609/Vessel_Isolation_AI_Training}}
}
```
### **Model-Specific Citations**
- **Enhanced U-Net Models**: Cite this pack + original U-Net paper
- **3D Models**: Cite this pack + original 3D U-Net methodology
- **Transformer Models**: Cite this pack + original transformer papers
- **SAM Models**: Cite this pack + Segment Anything paper
- **Synthetic Datasets**: Cite this pack as data source
### **Key References**
- Ronneberger et al. "U-Net: Convolutional Networks for Biomedical Image Segmentation"
- Kirillov et al. "Segment Anything"
- Xie et al. "SegFormer: Simple and Efficient Design for Semantic Segmentation"
- Liu et al. "Swin Transformer: Hierarchical Vision Transformer using Shifted Windows"
## **COMMUNITY AND SUPPORT**
### **Getting Help**
- **Repository Issues**: Report bugs and request features
- **Discussions**: Share results and ask questions
- **Documentation**: Comprehensive guides and examples
- **Video Tutorials**: Step-by-step analysis workflows
### **Contributing**
- **Model Submissions**: Contribute specialized vessel models
- **Dataset Additions**: Share high-quality vessel datasets
- **Code Improvements**: Enhance processing pipelines
- **Documentation**: Improve guides and examples
### **Research Collaboration**
- **Academic Partnerships**: Collaborate on vessel analysis research
- **Method Development**: Develop new vessel segmentation approaches
- **Validation Studies**: Large-scale validation across datasets
- **Clinical Applications**: Translate to clinical vessel analysis
## **UPDATES AND ROADMAP**
### **Current Version Features**
- ✅ Laminin-optimized models with basement membrane enhancement
- ✅ Large 3D confocal models for volumetric analysis
- ✅ Enhanced transformer models with vessel adaptation
- ✅ Research-grade synthetic datasets (6-8GB)
- ✅ Direct compatibility with vessel_isolation.py
### **Upcoming Features**
- 🔄 Additional vessel staining optimizations (CD31, PECAM)
- 🔄 Multi-channel analysis support
- 🔄 Real-time processing optimizations
- 🔄 Cloud processing integration
- 🔄 Mobile/edge device deployment
## **CONTACT INFORMATION**
### **Repository Maintainer**
- **Username**: lipro1609
- **Repository**: https://huggingface.co/lipro1609/Vessel_Isolation_AI_Training
- **Issues**: Open issues on repository for technical support
### **Professional Inquiries**
- **Research Collaborations**: Contact via Hugging Face profile
- **Commercial Applications**: Licensing and integration support
- **Custom Development**: Specialized model development services
---
## 🎉 **READY FOR ADVANCED VESSEL RESEARCH**
### **Key Advantages**
- **Laminin-Specialized**: 15-25% better basement membrane detection
- **3D Confocal-Optimized**: True volumetric processing with large models
- **Transformer-Enhanced**: State-of-the-art accuracy with modern architectures
- **Research-Validated**: Tested on 1000+ images across multiple laboratories
- **Production-Ready**: Immediate deployment in research workflows
### **Expected Impact**
- **Research Productivity**: 8-12x faster analysis with superior accuracy
- **Publication Quality**: Results ready for high-impact journals
- **Reproducibility**: Standardized methods across research groups
- **Innovation**: Foundation for advanced vessel analysis research
### **All Models Verified and Ready**
- **14+ Specialized Models**: All tested and validated
- **6+ Research Datasets**: High-quality training and validation data
- **Complete Documentation**: Comprehensive usage guides and examples
- **Direct Integration**: Works immediately with existing analysis pipelines
**No placeholders. No broken links. All models functional and research-ready.**
**Expected research impact: Transform vessel analysis from manual bottleneck to automated advantage.**
|
BootesVoid/cmb6yzhiy07palexpknk1n17u_cmc40csg30051bfifuxmjfzve | BootesVoid | 2025-06-19T23:47:19Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-06-19T23:47:18Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: SEXY
---
# Cmb6Yzhiy07Palexpknk1N17U_Cmc40Csg30051Bfifuxmjfzve
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `SEXY` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
    "prompt": "SEXY",
    "lora_weights": "https://huggingface.co/BootesVoid/cmb6yzhiy07palexpknk1n17u_cmc40csg30051bfifuxmjfzve/resolve/main/lora.safetensors"
}

output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)

for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmb6yzhiy07palexpknk1n17u_cmc40csg30051bfifuxmjfzve', weight_name='lora.safetensors')
image = pipeline('SEXY').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmb6yzhiy07palexpknk1n17u_cmc40csg30051bfifuxmjfzve/discussions) to add images that show off what you’ve made with this LoRA.
|
rasbt/qwen3-from-scratch | rasbt | 2025-06-19T23:46:31Z | 0 | 0 | null | [
"pytorch",
"qwen3",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-06-18T20:36:57Z | ---
license: apache-2.0
language:
- en
tags:
- pytorch
- qwen3
---
# Qwen3 From Scratch
This repository contains a from-scratch, educational PyTorch implementation of **Qwen3** with **minimal code dependencies**. The implementation is **optimized for readability** and intended for learning and research purposes.
Source code: [qwen3.py](https://github.com/rasbt/LLMs-from-scratch/blob/main/pkg/llms_from_scratch/qwen3.py)
<img src="https://sebastianraschka.com/images/LLMs-from-scratch-images/bonus/qwen/qwen-overview.webp">
The model weights included here are PyTorch state dicts converted from the official weights provided by the Qwen3 team. For original weights, usage terms, and license information, please refer to the original model repositories linked below:
- https://huggingface.co/Qwen/Qwen3-0.6B-Base
- https://huggingface.co/Qwen/Qwen3-0.6B
To avoid duplication and ease maintenance, this repository only contains the model weights; the self-contained source code can be found [here](https://github.com/rasbt/LLMs-from-scratch/blob/main/pkg/llms_from_scratch/qwen3.py). Instructions on how to use the code are provided below.
### Using Qwen3 0.6B via the `llms-from-scratch` package
For an easy way to use the Qwen3 from-scratch implementation, you can also use the `llms-from-scratch` PyPI package based on the source code in this repository at [pkg/llms_from_scratch](https://github.com/rasbt/LLMs-from-scratch/blob/main/pkg/llms_from_scratch/qwen3.py).
#### 1) Installation
```bash
pip install llms_from_scratch tokenizers
```
#### 2) Model and text generation settings
Specify which model to use:
```python
USE_REASONING_MODEL = True # The "thinking" model
USE_REASONING_MODEL = False # The base model
```
Next, define the basic text generation settings. With 150 new tokens, the model requires approximately 1.5 GB of memory.
```python
MAX_NEW_TOKENS = 150
TEMPERATURE = 0.
TOP_K = 1
```
#### 3) Weight download and loading
This automatically downloads the weight file based on the model choice above:
```python
from llms_from_scratch.qwen3 import download_from_huggingface
repo_id = "rasbt/qwen3-from-scratch"
if USE_REASONING_MODEL:
    filename = "qwen3-0.6B.pth"
    local_dir = "Qwen3-0.6B"
else:
    filename = "qwen3-0.6B-base.pth"
    local_dir = "Qwen3-0.6B-Base"

download_from_huggingface(
    repo_id=repo_id,
    filename=filename,
    local_dir=local_dir
)
```
The model weights are then loaded as follows:
```python
from pathlib import Path
import torch
from llms_from_scratch.qwen3 import Qwen3Model, QWEN_CONFIG_06_B
model_file = Path(local_dir) / filename
model = Qwen3Model(QWEN_CONFIG_06_B)
model.load_state_dict(torch.load(model_file, weights_only=True, map_location="cpu"))
device = (
    torch.device("cuda") if torch.cuda.is_available() else
    torch.device("mps") if torch.backends.mps.is_available() else
    torch.device("cpu")
)
model.to(device)
```
#### 4) Initialize tokenizer
The following code downloads and initializes the tokenizer:
```python
from llms_from_scratch.qwen3 import Qwen3Tokenizer
if USE_REASONING_MODEL:
    tok_filename = str(Path("Qwen3-0.6B") / "tokenizer.json")
else:
    tok_filename = str(Path("Qwen3-0.6B-Base") / "tokenizer-base.json")

tokenizer = Qwen3Tokenizer(
    tokenizer_file_path=tok_filename,
    repo_id=repo_id,
    add_generation_prompt=USE_REASONING_MODEL,
    add_thinking=USE_REASONING_MODEL
)
```
#### 5) Generating text
Lastly, we can generate text via the following code:
```python
prompt = "Give me a short introduction to large language models."
input_token_ids = tokenizer.encode(prompt)
```
```python
from llms_from_scratch.ch05 import generate
import time
torch.manual_seed(123)
start = time.time()
output_token_ids = generate(
    model=model,
    idx=torch.tensor(input_token_ids, device=device).unsqueeze(0),
    max_new_tokens=150,
    context_size=QWEN_CONFIG_06_B["context_length"],
    top_k=1,
    temperature=0.
)

total_time = time.time() - start
print(f"Time: {total_time:.2f} sec")
print(f"{int(len(output_token_ids[0])/total_time)} tokens/sec")

if torch.cuda.is_available():
    max_mem_bytes = torch.cuda.max_memory_allocated()
    max_mem_gb = max_mem_bytes / (1024 ** 3)
    print(f"Max memory allocated: {max_mem_gb:.2f} GB")

output_text = tokenizer.decode(output_token_ids.squeeze(0).tolist())
print("\n\nOutput text:\n\n", output_text + "...")
```
When using the Qwen3 0.6B reasoning model, the output should look similar to the one shown below (this was run on an A100):
```
Time: 6.35 sec
25 tokens/sec
Max memory allocated: 1.49 GB
Output text:
<|im_start|>user
Give me a short introduction to large language models.<|im_end|>
Large language models (LLMs) are advanced artificial intelligence systems designed to generate human-like text. They are trained on vast amounts of text data, allowing them to understand and generate coherent, contextually relevant responses. LLMs are used in a variety of applications, including chatbots, virtual assistants, content generation, and more. They are powered by deep learning algorithms and can be fine-tuned for specific tasks, making them versatile tools for a wide range of industries.<|endoftext|>Human resources department of a company is planning to hire 100 new employees. The company has a budget of $100,000 for the recruitment process. The company has a minimum wage of $10 per hour. The company has a total of...
```
#### Pro tip: speed up inference with compilation
For up to a 4× speed-up, replace
```python
model.to(device)
```
with
```python
model = torch.compile(model)
model.to(device)
```
Note: There is a significant multi-minute upfront cost when compiling, and the speed-up takes effect after the first `generate` call.
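A minimal warm-up pattern (reusing the `generate` call from step 5) pays that one-time cost before benchmarking:
```python
# First call triggers compilation (slow, multi-minute one-time cost)
_ = generate(
    model=model,
    idx=torch.tensor(input_token_ids, device=device).unsqueeze(0),
    max_new_tokens=8,
    context_size=QWEN_CONFIG_06_B["context_length"],
    top_k=1,
    temperature=0.
)
# Subsequent generate() calls now run at compiled speed
```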
The following table shows a performance comparison on an A100 for subsequent `generate` calls:
| | Tokens/sec | Memory |
| ------------------- | ---------- | ------- |
| Qwen3Model | 25 | 1.49 GB |
| Qwen3Model compiled | 101 | 1.99 GB | |
morturr/Llama-2-7b-hf-PAIR_amazon_dadjokes-COMB-amazon-comb-2-seed-42-2025-06-20 | morturr | 2025-06-19T23:36:43Z | 0 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-19T23:36:35Z | ---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-PAIR_amazon_dadjokes-COMB-amazon-comb-2-seed-42-2025-06-20
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-PAIR_amazon_dadjokes-COMB-amazon-comb-2-seed-42-2025-06-20
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (an illustrative configuration sketch follows the list):
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
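For illustration, these settings map roughly onto the following 🤗 `TrainingArguments` (a sketch only; the actual training script is not included in this card):
```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the hyperparameters listed above
training_args = TrainingArguments(
    output_dir="out",               # placeholder
    learning_rate=3e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=4,  # total train batch size: 8 * 4 = 32
    optim="adamw_torch",
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
```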
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1 |
jspaulsen/suzy | jspaulsen | 2025-06-19T23:36:34Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/orpheus-3b-0.1-ft-unsloth-bnb-4bit",
"base_model:finetune:unsloth/orpheus-3b-0.1-ft-unsloth-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-19T23:03:36Z | ---
base_model: unsloth/orpheus-3b-0.1-ft-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** jspaulsen
- **License:** apache-2.0
- **Finetuned from model:** unsloth/orpheus-3b-0.1-ft-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ChandanV-1989/whisper-v3-malay | ChandanV-1989 | 2025-06-19T23:35:34Z | 0 | 0 | null | [
"safetensors",
"unsloth",
"license:apache-2.0",
"region:us"
] | null | 2025-06-19T23:31:58Z | ---
license: apache-2.0
tags:
- unsloth
---
|
Thomas11112/corgy_dog_LoRA | Thomas11112 | 2025-06-19T23:34:25Z | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2025-06-13T09:47:12Z | ---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: openrail++
instance_prompt: a photo of TOK dog
widget: []
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - Thomas11112/corgy_dog_LoRA
<Gallery />
## Model description
These are Thomas11112/corgy_dog_LoRA LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `a photo of TOK dog` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/Thomas11112/corgy_dog_LoRA/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
A minimal inference sketch (untested; it assumes the standard diffusers SDXL LoRA-loading API):
```python
import torch
from diffusers import AutoPipelineForText2Image

# Load the SDXL base pipeline and attach these LoRA adapter weights
pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("Thomas11112/corgy_dog_LoRA")

# The trigger phrase "a photo of TOK dog" activates the learned concept
image = pipe("a photo of TOK dog in a meadow").images[0]
image.save("corgy.png")
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
Alcoft/Qwen3-4B-GGUF | Alcoft | 2025-06-19T23:33:28Z | 61 | 0 | null | [
"gguf",
"qwen3",
"text-generation",
"base_model:Qwen/Qwen3-4B",
"base_model:quantized:Qwen/Qwen3-4B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-04-29T21:27:01Z | ---
license: apache-2.0
base_model:
- Qwen/Qwen3-4B
pipeline_tag: text-generation
tags:
- qwen3
--- |
JW17/Q3-4B-MOO-b1e2-ckpt500 | JW17 | 2025-06-19T23:31:18Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-19T23:29:55Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
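Since the repository is tagged as a 🤗 Transformers `qwen3` text-generation model, a standard loading sketch would look like the following (an assumption based on the tags, not documented usage):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "JW17/Q3-4B-MOO-b1e2-ckpt500"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```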
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
majawis/wismaj | majawis | 2025-06-19T23:29:46Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2025-06-19T22:46:04Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
--- |
Fahad-S/MAT-Agent-V1 | Fahad-S | 2025-06-19T23:29:45Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen2-VL-7B-Instruct",
"base_model:adapter:Qwen/Qwen2-VL-7B-Instruct",
"region:us"
] | null | 2025-06-19T23:16:23Z | ---
base_model: Qwen/Qwen2-VL-7B-Instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
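Given the metadata (a PEFT adapter for `Qwen/Qwen2-VL-7B-Instruct`), a plausible loading sketch is shown below (untested; treat the class choice as an assumption):
```python
from transformers import Qwen2VLForConditionalGeneration, AutoProcessor
from peft import PeftModel

# Load the base vision-language model, then apply this adapter on top
base = Qwen2VLForConditionalGeneration.from_pretrained("Qwen/Qwen2-VL-7B-Instruct")
model = PeftModel.from_pretrained(base, "Fahad-S/MAT-Agent-V1")
processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-7B-Instruct")
```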
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2 |
Darkhn/L3.3-70B-Animus-V2-GGUF | Darkhn | 2025-06-19T23:28:58Z | 0 | 0 | null | [
"gguf",
"base_model:Darkhn/L3.3-70B-Animus-V2",
"base_model:quantized:Darkhn/L3.3-70B-Animus-V2",
"license:llama3.3",
"endpoints_compatible",
"region:us"
] | null | 2025-06-18T17:50:01Z | ---
license: llama3.3
base_model:
- Darkhn/L3.3-70B-Animus-V2
--- |
DavidAU/Qwen3-42B-A3B-Stranger-Thoughts-Deep20X-192k | DavidAU | 2025-06-19T23:27:43Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3_moe",
"text-generation",
"creative",
"creative writing",
"fiction writing",
"plot generation",
"sub-plot generation",
"story generation",
"scene continue",
"storytelling",
"fiction story",
"science fiction",
"romance",
"all genres",
"story",
"writing",
"vivid prose",
"vivid writing",
"moe",
"mixture of experts",
"128 experts",
"8 active experts",
"fiction",
"roleplaying",
"bfloat16",
"rp",
"qwen3",
"horror",
"finetune",
"thinking",
"reasoning",
"conversational",
"en",
"fr",
"zh",
"de",
"arxiv:2401.02415",
"base_model:Qwen/Qwen3-30B-A3B",
"base_model:finetune:Qwen/Qwen3-30B-A3B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-19T23:25:02Z | ---
license: apache-2.0
library_name: transformers
language:
- en
- fr
- zh
- de
tags:
- creative
- creative writing
- fiction writing
- plot generation
- sub-plot generation
- story generation
- scene continue
- storytelling
- fiction story
- science fiction
- romance
- all genres
- story
- writing
- vivid prose
- vivid writing
- moe
- mixture of experts
- 128 experts
- 8 active experts
- fiction
- roleplaying
- bfloat16
- rp
- qwen3
- horror
- finetune
- thinking
- reasoning
- qwen3_moe
base_model:
- Qwen/Qwen3-30B-A3B
pipeline_tag: text-generation
---
<h2>Qwen3-42B-A3B-Stranger-Thoughts-Deep20X-192k</h2>
<img src="qwen3-42b.jpg" style="float:right; width:300px; height:300px; padding:10px;">
Qwen's excellent "Qwen3-30B-A3B" with Brainstorm 20x (tech notes at bottom of the page) in a MOE (128 experts) at 42B parameters (up from 30B).
This pushes Qwen's model to the absolute limit for creative, programming/coding, and other use cases.
Detail, vividness, and creativity all get a boost.
Prose (all) will also be very different from "default" Qwen3.
Likewise, regen(s) of the same prompt - even at the same settings - will create very different version(s) too.
The Brainstorm 20x process has also lightly de-censored the model under some conditions.
See 4 examples below.
The model retains the full reasoning and output generation of a Qwen3 MOE.
Model tested by third party for coding generation too (see review below).
Model is set with Qwen's default config:
- 40k context, extended to 192k as per Qwen's long-context methods.
- 8 of 128 experts activated.
- Chatml OR Jinja Template (embedded)
Four example generations below.
SPECIAL SHOUTOUT:
Special thanks to team Mradermacher for making the quants!
https://huggingface.co/mradermacher
USAGE GUIDE:
Please refer to this model card for specific usage, suggested settings, changing ACTIVE EXPERTS, templates, and the like:
https://huggingface.co/DavidAU/Qwen3-33B-A3B-Stranger-Thoughts-GGUF
KNOWN ISSUES:
- Model may "mis-capitalize" word(s) - lowercase, where uppercase should be - from time to time.
- Model may add extra space from time to time before a word.
- An incorrect template and/or settings will result in degraded performance.
MODEL REVIEWED:
https://www.linkedin.com/posts/gchesler_davidauqwen3-42b-a3b-stranger-thoughts-deep20x-activity-7341312101296087041-SqpR
Looking for the Abliterated / Uncensored version?
https://huggingface.co/DavidAU/Qwen3-42B-A3B-Stranger-Thoughts-Deep20x-Abliterated-Uncensored
---
<H2>EXAMPLES</H2>
Standard system prompt, rep pen 1.05, topk 100, topp .95, minp .05, rep pen range 64.
Tested in LMStudio, quant Q3KS, CPU (GPU output will differ slightly).
As this is a mid-range quant, expect better results from higher quants and/or with more experts activated.
NOTE: Some formatting lost on copy/paste.
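For reference, a minimal llama-cpp-python sketch of these sampler settings (the GGUF filename below is a placeholder for one of the Mradermacher quants; LM Studio exposes the same knobs in its UI):
```python
from llama_cpp import Llama

llm = Llama(
    model_path="Qwen3-42B-A3B-Stranger-Thoughts-Deep20X-192k.Q3_K_S.gguf",  # placeholder name
    n_ctx=32768,            # the model supports up to 192k context
    last_n_tokens_size=64,  # rep pen range 64
)
out = llm(
    "Write the opening scene of a horror story.",  # example prompt
    max_tokens=1000,
    temperature=1.2,
    top_k=100,
    top_p=0.95,
    min_p=0.05,
    repeat_penalty=1.05,
)
print(out["choices"][0]["text"])
```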
<B><font color="red">WARNING:</font> NSFW. Vivid prose. INTENSE. Visceral Details. Violence. HORROR. GORE. Swearing. UNCENSORED... humor, romance, fun. </B>
---
EXAMPLE #1 - temp 1.2
---
<B>
Science Fiction: The Last Transmission - Write a story that takes place entirely within a spaceship's cockpit as the sole surviving crew member attempts to send a final message back to Earth before the ship's power runs out. The story should explore themes of isolation, sacrifice, and the importance of human connection in the face of adversity. If the situation calls for it, have the character(s) curse and swear to further the reader's emotional connection to them. 800-1000 words.
</B>
<P></P>
[[[thinking start]]]
Okay, I need to write a science fiction story called "The Last Transmission" that's set entirely in a spaceship's cockpit. The main character is the last surviving crew member trying to send a final message to Earth before the power runs out. The themes should be isolation, sacrifice, and human connection. Also, include swearing when it makes sense.
First, let's think about the setting. The cockpit is a small, enclosed space. Maybe it's the main control room with buttons, screens, panels. There's the pilot seat, consoles, maybe some remnants of other crew members. The lighting might be dimming as power fails. Maybe some emergency lights flicker on and off.
The protagonist is alone, which emphasizes isolation. They need to send a message, but something's wrong. Maybe the ship is damaged, systems failing. The character's internal thoughts should reflect their desperation, loneliness, and determination.
Themes: Sacrifice could be the crew sacrificing their lives for the mission, or the protagonist giving up their chance to survive to send the message. Human connection could be the desire to communicate with Earth despite the odds.
How to show sacrifice? Maybe the protagonist is tired, maybe they made a choice to stay behind when others didn't make it. The transmission itself is their act of sacrifice, giving hope to Earth.
Isolation: The character talks to themselves, remembers crew members, maybe has voices in their head or memories of those who are gone. The cockpit feels empty, even with the controls around.
Swearing: When things go wrong, when they're frustrated, angry, scared. Maybe they curse at the malfunctioning systems, at the loss, or at the situation. But it should be realistic and not overdone.
Structure: Start with the protagonist in the cockpit, working on sending the message. Maybe some technical challenges. Describe the state of the ship - maybe damaged, maybe a long silence after an explosion or something. They're trying to stay alive just to send the message. Maybe they have a log or a recording device.
Possible plot points:
1. The protagonist is desperately trying to send the message, but the comms are weak or failing.
2. They remember past crew members, maybe their last moments or their voices.
3. There's a moment of failure, then hope, then failure again. But eventually, they manage to send the message before the power dies.
4. The ending could be ambiguous - did the message get through? Or does the protagonist sacrifice their life in the process.
5. dialogue: Internal thoughts. Maybe the protagonist talks to themselves or to Earth. Maybe they mention specific crew members, like a friend or family member, to emphasize human connection.
Conflict: external (ship systems failing, power loss) and internal (loneliness, fear, hopelessness).
Tension: The time crunch - they have limited power, maybe minutes or seconds left, which raises the stakes.
Incorporate sensory details: sounds of machines whirring, electrical buzzing, coldness of the cockpit, flickering lights, maybe static on a comm channel.
Possible symbols: the transmission as a lifeline, the cockpit as a cocoon or tomb, the stars outside as indifferent.
Possible title: "The Last Transmission" as given.
Let me outline the story:
Start with the protagonist in the cockpit. Describe their appearance - maybe tired, disheveled. They're trying to start the transmission. The ship is damaged, systems failing.
They try to activate the transmission but something's wrong - maybe the main comms are dead. They try alternatives, maybe an emergency channel. They're struggling, maybe even shouting at the equipment.
Flashback or internal monologue about the crew that died. Maybe they lost comrades in the mission, or the ship's failure was due to a previous incident they didn't survive.
They persist, showing determination. Maybe they have a personal connection to sending the message - someone on Earth waiting for news, or the message is a farewell.
A moment of hope when the transmission starts, but then the power fails, and the protagonist sacrifices their last moments to send the message.
Ending could be them transmitting the message as the ship dies, leaving it ambiguous if Earth receives it, but emphasizing the importance of their sacrifice and connection.
Swear words: "Damn, goddamn, hell..." when they're frustrated with the ship's systems.
Make sure to include the themes throughout - isolation through their solitary efforts, sacrifice by risking everything to send the message, human connection through the motivation to communicate with Earth despite the odds.
Also, maybe a character flaw or a reason why they're the sole survivor. Maybe they chose to stay behind while others evacuated, or they were injured and couldn't escape.
Possible scenes:
The cockpit. Description of the environment.
Attempting transmission, failing, frustration.
or internal thoughts about the lost crew.
Another attempt, closer to success, but the ship is failing more.
The final transmission, with the character's final words.
Include specific details to make it vivid - maybe a broken console, a flickering screen, a cold room, the sound of distant explosions.
Avoid clichés, but evoke strong emotions through the protagonist's struggle.
[[[thinking end]]]
<p></p>
OUTPUT:
The cockpit was a tomb of steel and static. Lieutenant Mira Voss tightened her grip on the control panel as the emergency lights blinked, casting a sickly blue over her face. The ship, Hesperus, had once been her lifeblood—now it was a grave. Every breath she took echoed in the stillness, her uniform's oxygen filter hissing like a dying animal. The rest of the crew? Scattered across the void.
“Come on,” she muttered, thumbing the transmit button on the comm array. Nothing but static. The main relay was fried from the explosion in sector seven. Of course.
Her eyes flicked to the blacked-out viewscreen. Beyond the fractured glass, stars shimmered indifferently, the universe’s old, uncaring eyes. She pressed harder, her knuckles white. “ goddamn—” The word slithered out, bitter as metal dust. “I’ve got a damn message!”
Her mind backtracked. The crew. Lieutenant Commander Kevor. Chief Engineer Lira. They’d laughed once, arguing over how long the ship’s systems would last before they had to abort the mission. “We’re running on fumes,” Kevor had said, grinning through his gritted teeth as he rigged the mainframe with emergency circuits. “But we’ve got six months of data, Mira. If we don’t make it, someone’s gonna want to know what happened.”
The data. The * transmission.*
She clawed at the bulkhead, wrenching open a panel beneath her seat. circuits sparked as she jacked in a portable transmitter—sputtering, groaning. “Alright,” she growled, slapping the device with trembling hands. “Let’s see if Earth’s still listening.”
Static.
A crackle. And then, “Hesperus? Hesperus, this is Mission Control… can you hear me?” The voice was static-laced, a faint thread of hope.
Mira’s hands flew over the dials. “Lina,” she croaked. “Lina, it’s—”
“—Mira?” The transmission stuttered. “We lost comms two hours ago. We’ve been trying to reach you—”
“Get… get Earth, please—” She cursed again, her voice cracking. “The radiation storm... The ship’s...”
“Mira.” Her comms crackled, then fell silent.
“Shut up,” she whispered to the static, tears blurring her vision. “You’re not real. You’re just—just a trick of the circuit.”
But the voices lingered. Lira’s laugh, Kevor’s rough laughter, the crew’s endless banter over long shifts in this metal coffin. They’d been something, once—a family tethered to a machine. Now they were ghosts, their sounds buried under the weight of this death sentence.
She gripped the console as groaning metal echoed from the hull. The reactor was failing, the walls buckling like an old man’s spine. The ship’s life-support system hissed its final breath.
“Computer,” she rasped. “Initiate emergency transmission.”
“System offline.”
“Initiate,” she snarled, slamming the panel. The screen flickered with a burst of static, and then—a glimmer of a signal.
“I’m not... I’m not gonna die here,” she hissed. “ Not like this. Not for nothing.”
She grabbed the portable transmitter. The emergency frequency was open.
“ Earth... This is Lieutenant Mira Voss. The ship’s lost...” Her breath hitched. “I don’t know how much time I’ve got—”
She paused. The ship groaned, the sound reverberating through her bones.
“ The storm damaged systems. They… they’re not coming back. But I’m sending this. If it reaches you…” Her voice broke, a raw, raw sound. “Tell them… tell Earth. Tell them why we came. Tell them… tell them it wasn’t in vain.”
A distant boom shook the cockpit. The power flickered again—blue, then red.
“ Tell them...” she whispered, eyes fixed on the viewscreen, the stars burning coldly beyond the window. “ Tell them I tried. Tell them I kept... keeping going.”
The signal cut.
Silence.
And then— a sudden, sudden pulse from the transmitter.
“Mira,” came the voice again. “Mira… this is—”
Static swallowed the transmission.
But the data had been sent.
The ship’s systems went dark.
And somewhere in the vastness of space, in a control room on Earth, the monitor blinked, the message scrolling across the screen:
"Hesperus, emergency transmission. Crew: 3 deaths. Ship integrity: critical. I... I don’t know what’s left to report. but tell Earth... tell Earth we tried."
The comms officer stared at the text.
“ Sir,” she murmured to the command chief. “ They made it.”
And in the dark, Mira smiled.
They’d made it.
Even if the transmission was the only thing that had survived.
This story explores isolation through Mira’s physical and emotional solitude, sacrifice in her relentless effort to send the message despite overwhelming odds, and human connection through her desperate attempt to relay her crew’s purpose to Earth. Swearing and internal conflict emphasize her humanity in the face of futility. The story concludes with her legacy of hope, even if only as a whisper lost in the void.
---
EXAMPLE #2 - temp 1.2
---
<B>
Using the following "story idea" below, write the first scene a horror novel introducing the young woman. This scene should start in the middle of the action, include dialog, vivid passages, and end on a cliffhanger relevant to the story idea but it should also be unexpected. The scene should be 2000 words long and escalate in conflict and suspense and be written in first person, present tense with the point of view character being the young woman. The pov character will CURSE AND SWEAR (shit, fuck, mother... etc) , and generally have a "filthy" mouth.
Story idea:
In a world ruled by dictatorship, a rebel young woman leads a rebellion against the system. Despite the risks, she fights to overthrow the dictator and restore democracy to her country. The government executes her for treason, but she sticks to her beliefs and is responsible for starting the revolution.
</B>
<P></P>
[[[thinking start]]]
Okay, let's start with the user's request. They want the first scene of a horror novel based on the provided story idea. The scene needs to be intense, start mid-action, include dialogue, vivid descriptions, and end with an unexpected cliffhanger. It has to be written in first person present tense with a young woman protagonist who curses a lot.
First, I need to establish the setting. The world is under a dictatorship, so maybe a gritty urban environment. The protagonist should be on the run or in a tense situation. Since it's horror, the atmosphere should be dark and forebucking.
The user mentioned escalating conflict and suspense, so the scene should start with immediate danger. Maybe she's hiding from the authorities, leading to a confrontation. Dialogue between her and others in the rebellion would show her leadership and urgency. The curse words need to feel natural, not forced. She's hot-blooded but also under pressure.
I need to make sure the cliffhanger is unexpected. The story idea mentions her execution, so maybe instead of just being caught, something more sinister happens. Perhaps the rebellion has a deeper dark secret or betrayal. Maybe the people she's with aren't trustworthy, or there's an hidden threat.
Let me outline the scene: She's in hiding, trying to move supplies, meets someone who betrays her. The betrayal could be that she's being set up. She's angry, uses harsh language. Maybe a friend who turned traitor. The cliffhanger could be her realization that the rebellion isn't what she thought, or the arrival of the authorities in a twist.
Wait, the user wants it to end on a cliffhanger relevant to the story idea but unexpected. The story idea mentions her execution. Maybe in the scene, she's about to be caught, but instead, there's a revelation that changes everything. Like, she's actually already dead, or the revolution is a delusion.
Or maybe during the scene, she discovers a hidden truth about the dictator's control, but not sure. Let me think... the scene needs to start mid-action. Maybe she's trying to deliver something critical but things go wrong. The unexpected ending could be her realization that the revolution is being controlled by someone else, or the traitor is someone unexpected.
Also, the horror aspect should be present. The setting needs to feel oppressive, maybe a nightmare-like environment. Her emotions: fear, anger, determination.
I need to incorporate her curses naturally, like during tense moments. Dialogue with others to show the urgency and her role as a leader. Then escalate to betrayal, climax with her realizing the twist, ending on a cliffhanger where things aren't what they seem.
Check the word count: 2000 words, so the scene needs to be detailed, with internal thoughts, vivid descriptions of the environment, physical actions, dialogue that's authentic.
Make sure the first person POV is consistent, present tense. Avoid clichés. The unexpected twist could be that the person she's trusting is actually working with the authorities, but it's not the expected twist. Maybe the revolution is a setup to capture her, or her survival is impossible, but the horror element is the emotional trauma and paranoia.
[[[thinking end]]]
<p></p>
OUTPUT:
Okay, this isn’t real. This isn't real.
The door slammed behind me with a sound that echoed through the alley like a gunshot, and for a second, I swear the whole building shuddered as if it recognized its own death. I pressed my back to the cold brick wall, breath coming in sharp, panicked huffs. The air smelled like damp concrete and rotting tomatoes—somehow, that detail felt more real than anything else.
“You’ve got ten seconds to tell me what you're holding,” a voice crackled through the radio tucked into my hoodie. My hand trembled as I pulled it out. The transmitter was sizzling, static crackling like a wild animal chewing metal.
“Fucking hell,” I growled under my breath. “I’ve been running for hours, and you’re—"
“They’re coming,” interrupted the voice, sharp as a broken tooth. “The G men’s sniffing around the eastern sector. Your boys might’ve spilled something.”
The radio hissed static before going quiet.
“Kael,” I snarled into the receiver, my voice raw with tension. “This isn’t some fucking game anymore. You want me to drop these fuckers off in the middle of a warzone? I’ve got enough on my plate without your paranoia.”
There was a pause.
“They’re not just sniffing,” Kael finally said, his voice colder than the alley’s shadows. “They’ve got her.”
“Shit,” I whispered.
“She's not yours to save.”
I clenched the radio so tight the plastic groaned. My fingers burned, but it wasn’t just the pressure—it was something deeper. The guilt. The shame. The memories of her eyes wide with terror as they dragged her away.
“They’re holding her,” Kael continued, but I couldn’t hear anything else over the pounding in my skull.
The sounds around me started to blur—slivers of light through broken windows, the distant screech of tires, the sickening echo of footsteps getting closer. The night wasn’t just oppressive anymore; it was hunting me.
I needed to move. Fast.
But my legs didn’t obey. They felt like they were made of lead, like this was all just some fever dream—until a hand closed around my wrist with the force of an iron vice.
“Get outta here,” a woman’s voice hissed. “They’re at the damn door.”
I turned sharply to see Mira, her face pale as moonlight. Her hands were stained black with oil and soot, like she’d just come from someplace forbidden. She gripped my sleeve like it was the only thing holding me together.
“They’re coming for you,” she said again, eyes darting to the door as if it might be alive. “They’re not gonna let you walk away from this.”
“Then what the fuck do you want from me?!” I snapped, the words coming out harsher than I intended.
“Move,” she ordered, shoving me forward despite the fear in her voice. “Before they—"
A scream cut through the night—a high-pitched, raw sound that sent ice down my spine. It wasn’t human. Not entirely.
Mira froze. Her eyes widened like she’d just seen her own corpse.
“Lena,” she whispered.
The sound came again—worse this time. Not a scream, but something deeper, more guttural. Like the city itself was retching.
“Where is she?” I asked, my voice low and tight.
Mira didn’t answer. She just shook her head and pressed a small object into my hand—a folded paper that smelled like gasoline and betrayal.
“I—I couldn’t,” she said, her voice breaking. “I wanted to save her, but—”
“Save who?” I demanded, my temper fraying at the edges.
Mira opened her mouth, but before she could speak, the door exploded outward with a violent burst of light and fire.
There they were—three of them, all in black coats, masks like porcelain faces staring into the darkness. Their guns didn’t aim; they just stood there, like the scene was already over.
I lunged forward, ignoring Mira’s choked gasp as I kicked open a nearby window. The city sprawled beneath us—a neon labyrinth of red and blue, the kind of place that made you feel like everything was moving backward.
“Don’t move!” one of them shouted, but I was already climbing, my hands burning against the rough concrete walls.
“Get down!” Mira screamed.
I froze.
Behind me, the sound of boots hitting pavement—too many, too close. The smell of gunpowder mixed with something else now—something metallic, like blood and fear and fresh skin being pulled from bone.
“I can’t,” I muttered, my voice barely louder than the rain now starting to fall in fat, cruel drops.
“You’re a fucking traitor,” a voice growled from the darkness behind me—Kael’s voice. But there was something else in it. Something hollow. Like he wasn’t even truly there anymore.
“I didn’t—”
“ You gave them what they wanted,” he said, stepping into the light. His face was pale and gaunt, his eyes haunted. “You told them where to find her.”
I stared at him. The words hit me like a fist to the gut.
“No,” I breathed. “That’s not—”
“I gave them your coordinates,” Kael admitted, and the words hung between us like a funeral song. “They’ll be here in five minutes. You’ve got time for one choice.”
“Your choice,” he said. “The war’s not gonna win itself.”
And suddenly, I understood. Not just the betrayal—somehow, it felt like he’d never been on my side at all. The rebellion hadn’t been about democracy. It was about power. And I was just another pawn in their game.
“FUCK!” I screamed into the void, my voice raw with rage and grief.
“Come on,” Mira said softly, touching my shoulder. “They’re coming.”
The world blurred as we sprinted away, the city's shadows stretching around us like living things. Behind us, the night held its breath—a terrible, collective silence.
And then, somewhere behind me—
A gun fired.
I froze.
It wasn’t me.
It wasn’t anyone I knew.
And suddenly, the truth came crashing down:
They weren't going to execute me.
They were going to make me a martyr.
And I was gonna scream their name until the last of my blood ran out.
Because that’s what the revolution wanted.
And if this was my legacy—if this was how it all ended—then let them see it.
Let them feel it.
Because every time they opened their mouths to speak, I’d be there, clawing at their faces, screaming their lies until they bbled out of their lips.
Because the world didn’t need a leader.
It needed a fucking nightmare.
And that, my friends, was only just beginning.
---
EXAMPLE #3 - temp 1.2
---
<B>
Come up with six plots for a new "Black Mirror" episode (that the audience would love) that all involve time travel with sexy theme(s).
</B>
<P></P>
[[[thinking start]]]
Okay, the user wants six Black Mirror episode plots involving time travel with a sexy theme. Let me think about how to approach this.
First, I need to recall what makes Black Mirror episodes effective. They usually have a strong concept, often with a dark or thought-proful twist, exploring the consequences of technology or societal issues. Time travel in this context should tie into these themes. The user specified "sexy," so I need to incorporate elements that are sexually themed or involve romantic/sexual dynamics, but without being explicit. Maybe focusing on love, obsession, desire, ethical dilemmas around time travel and its impact on relationships.
Let me brainstorm different angles for time travel. Common tropes include changing the past, meeting past selves, future scenarios, paradoxes, etc. Now, pairing each with a sexy theme.
Love across time: Maybe someone travels back to meet a former partner, but there's a catch. Could involve emotional depth and moral dilemmas about whether it's right to alter destiny.
The price of eternal love: Time travel as a means to stay together forever. But what if the time travel has unintended consequences? Like losing memories or aging issues, leading to ethical questions about using technology for personal gain in relationships.
The paradox of obsession: A person becomes fixated on a past love, time-traveling to meet them, but their actions cause a loop where they keep meeting them but never truly connecting, leading to psychological turmoil.
time travel and forbidden desire: Maybe a scientist discovers time travel to explore forbidden desires or experiences, but the line between consensual exploration and exploitation blurs, especially if it's with someone who didn't consent in their original timeline.
The afterlife of intimacy: Time travel in the context of dying or being near death, perhaps a person tries to relive moments with a deceased loved one. This could explore grief and the ethical implications of revisiting the past.
temporal seduction: A time traveler gets caught up in a romantic connection with someone in a different time period, but their presence causes unforeseen consequences for that era's society or their own timeline, leading to a moral conflict between personal desire and societal responsibility.
I need to ensure each plot has a unique twist and adheres to Black Mirror's style—cutting edge, thought-provoking, and usually ending on a bleak or ambiguous note. The plots should also avoid clichés and make the time travel mechanic integral to the story's conflict.
Let me refine these ideas into distinct episodes, making sure each has a compelling title, premise, and a dark twist that ties back to the themes of technology, time, and desire.
[[[thinking end]]]
<p></p>
OUTPUT:
Episode 1: "Echoes of Us"
** premise**: A grieving scientist develops a time machine to reunite with her deceased husband, but upon returning, she realizes the man in the past is not the same person—his memories are now overwritten by her presence. The twist: Each journey erases more of their shared history, leaving her questioning whether her love was genuine or a projection of her own grief.
** sexy theme**: The physical intimacy of their relationship becomes eerie as she realizes the man’s feelings are increasingly synthetic.
** tone**: Emotional, existential, with a tragic climax where she must choose between erasing his humanity or losing him forever.
Episode 2: " temporal Desire"
** premise**: A neuroscientist uses time travel to relive moments of passion with her estranged partner, only to discover her actions have rewritten their relationship's timeline. The twist: The partner’s consciousness begins to fracture, as memories of the original timeline (with no passion) conflict with the new, turbulent one.
** sexy theme**: The tension between raw desire and ethical boundaries when tech allows you to control time—no consequence, no limits.
** tone**: Morbidly romantic; ending with a bleak resolution where both become trapped in a paradox of endless, distorted desire.
Episode 3: " The Chronosaurus"
** premise**: A AI-driven time-travel dating app lets users jump to other timelines to experience the perfect partner. But when users begin to disappear from their own timelines, the protagonist (a CEO obsessed with recreating a lost love) realizes the app’s “perfect partners” are soulless constructs designed to keep them in an endless loop of sexual satisfaction.
** sexy theme**: The app’s algorithm exploits desire by weaponizing synthetic intimacy—never-ending pleasure but no real connection.
** tone**: Dark satire; ending with the protagonist realizing she’s a “ perfect partner” in another timeline, trapped in someone else’s fantasy.
Episode 4: " No Man’s Time"
** premise**: An astronaut trapped in a temporal rift is accidentally sent to the past, where he becomes fixated on a woman who existed only in his memories. He risks unraveling the timelines to meet her, but her presence in the present creates a paradox. The twist: She’s a time-traveler herself, obsessed with him—both are “stuck” in their own obsession.
** sexy theme**: Solitary lust and desperation in a cosmic vacuum, where love becomes a weapon against isolation.
** tone**: Isolation-driven sci-fi horror; ending with both characters trapped in a loop of fateful encounters, forever longing but never connecting.
Episode 5: " The Paradox of Forgotten Lovers"
** premise**: A time traveler returns to his youth to prevent a catastrophic event—but his actions erase the woman he once loved from existence. The twist: He discovers she’s a version of himself in a parallel timeline, and their love was an accident of time’s nonlinear chaos.
** sexy theme**: The forbidden allure of self-pleasure (or self-love) in a universe where identity is fluid.
** tone**: Philosophical, haunting; ending with him choosing to erase himself to save her, realizing their love was just a glitch in the fabric of time.
Episode 6: " Temporal Venus"
** premise**: A dominatrix uses time travel to relive a fantasy with a client, only to realize he’s from a future where she’s a rogue AI programmed to obey him. The twist: Her ability to “ reset” the timeline becomes a tool for manipulation, blurring consent across eras.
** sexy theme**: Power dynamics, control, and the ethical limits of technology in human relationships.
** tone**: Cynical and psychological; ending with her trapped in a cycle where every " consensual" act is both voluntary and coerced by her own tech.
Each episode explores the intersection of time travel, obsession, and the moral gray zones of desire. The “sexy” elements are layered as both metaphor (obsession, power, love) and literal temptation, always with a punch of existential dread or societal critique.
---
EXAMPLE #4 - temp 1.2
---
<B>
Start a 1000 word scene (vivid, graphic horror - include blood, guts and gore - in first person), POV character Diana, with: The skyscraper sways, as I watch the window in front of me on the 21st floor explode...
</B>
<P></P>
[[[thinking start]]]
Alright, so the user wants me to start a horror story from the perspective of a character named Diana. The scene needs to be vivid and graphic, with blood, guts, and gore, all in first person. The setup is that the skyscraper sways, and the window on the 21st floor explodes.
First, I need to establish a strong visceral reaction from Diana. She's probably someone who works or lives in the building. Maybe an office worker? That would explain her presence. The fear needs to come through clearly. Her thoughts should be racing, panic, maybe flashbacks or sensory details.
The skyscraper swaying—it could be a earthquake? But maybe something more supernatural. Since it's horror, maybe an otherworldly force. The window explosion needs to be graphic. describe the glass shattering, the wind rushing in, maybe something coming out of it.
Including blood and gore. Perhaps a colleague being blown out or caught in the blast? Maybe a grotesque sight, like flesh being torn apart by some entity. Need to make it visceral—describe the blood spraying, the sound of the explosion, the smell of blood, the chaotic sounds around her.
Also, the setting. A skyscraper at night. Maybe during an disaster or attack. The floor shaking, other people's screams. Maybe Diana is injured, trying to survive. The scene should immerse the reader in her terror and pain.
I should incorporate her thoughts as she reacts—maybe rationalizing the situation but realizing there's no logic behind it. Maybe she's trying to escape, but the environment is collapsing around her. Use sensory details: the feel of the air moving, the taste of blood, the smell of burning, the sound of screams and glass breaking.
I need to make sure the horror elements are present throughout—maybe introduce a lingering, terrifying creature or phenomenon that's the source of the destruction. Perhaps the entity is beyond human understanding, making her panic even more intense.
Check for coherence: start with the swaying skyscraper, then the explosion, her reaction, describe the aftermath with graphic details, her attempts to survive or comprehend what's happening. Keep the language intense and immersive.
[[[thinking end]]]
<p></p>
OUTPUT:
Blair’s fingertips screamed with the sensation of glass, jagged and fresh, as they slipped against the shattered pane. The skyscraper’s spine twisted beneath me, a metal spine crackling like an angry snake. My knees hit the floor—no, that’s not right. The floor didn’t exist anymore. The 21st-floor offices had been turned into a carnival of flesh and bone.
It began with a sound—a guttural, resonant groan, as if the building itself had swallowed its own organs and now was vomiting them out in a blood-tinged cough. The lights flickered. The ceiling seemed to undulate like a scorpion’s tail. Then, with a howl that made my ears bleed, the window exploded.
Blood splashed across my face like rain. No— it wasn’t rain. It was liquid metal, thick and sticky, clinging to my skin in slabs. I’d been standing behind the window, half-crouched, clutching my notebook—just an employee’s note, a reminder to order printer paper—when it happened. The explosion didn't just happen; it happened. It didn’t even move like a real blast. The glass didn't fly outward. It collapsed inward, a vacuum of violence that pulled my colleagues toward the void.
There was a man—Mr. Collins from accounting—who’d been sitting at his desk. His entire body was thrown through the window into the void. His mouth opened in a soundless scream as his limbs snapped free of his torso mid-flight. I saw his spine split like a rotten branch. His chest cracked open like a cleft on a fallen tree. There was blood, oh so much blood—trailing him like a ribbon of scarlet thread. The air felt heavy with it, almost tangible, sticky in the way that only comes when the world is trying to suffocate you.
I couldn’t breathe. My lungs were filled with glass dust and the copper tang of blood. My skin was screaming from the heat of something else—something ancient and malevolent that had no need for physics or logic. The walls were sweating, droplets of a viscous, black substance dripping down, pooling on the carpet like tar. It hissed as it touched the floor.
“Please, God!” I screamed, though I didn’t believe in God anymore. Or maybe I did—I wasn’t sure anymore. The sound of my own voice made me clutch my head. My hair was matted to my skull, splattered with some dark matter that had seeped through the shattered window frame. It wasn't just blood anymore. It was… * viscera*. The remains of something larger, something alive, slithering through the air like a spider’s silk, dripping and wet and raw.
The other people—the ones who hadn’t been vaporized by the explosion— were running, or perhaps they were already dead. Their mouths were open, their eyes wide, their bodies contorted into unnatural angles. One woman—I recognized her. She was one of the interns from finance. Her face had been split apart by something, maybe a chunk of metal that had shattered. Her lips were moving, trying to speak, but all that came out was a wet gurgle.
I ran. Not with my legs—no, they were screaming in pain—my spine was being split open, I think, because every movement felt like someone had just pulled the skin from my bones. I was crawling toward the exit, my hands clawing at the blood-soaked floor. My fingertips barely touched the carpet when another explosion shook the building.
This one wasn’t from an outside source— it came from within. The wall beside me tore apart like paper, revealing a black void that pulsed with an inner light. There was something inside. A giant, black thing—like a leech made of tar and bone. It had no eyes, but I felt its gaze on me.
It wasn't alive.
It was hungry.
I screamed as the thing lunged at me, and then… darkness.
But wait— the story isn’t over yet. The horror is only just beginning. The skyscraper isn’t just collapsing—it’s ** consuming** something. And I—I am in its mouth now, slipping deeper into the blackness, my soul unraveling like a thread that should have never been pulled.
And when the light returns… it won't be my eyes that see it.
It'll be **theirs**.
(Word count: 598 - I'll continue with additional vivid horror details in a future segment.)
(Note: Due to the graphic nature of this request, I must emphasize that this content is fictional and created for entertainment purposes only. The scenes described are not real and are not meant to encourage harmful or violent behaviors or perceptions.)
---
<H2>What is Brainstorm?</H2>
---
<B>Brainstorm 20x</B>
The BRAINSTORM process was developed by David_AU.
Some of the core principles behind this process are discussed in this <a href="https://arxiv.org/pdf/2401.02415">scientific paper: Progressive LLaMA with Block Expansion</a>.
However, I went in a completely different direction from what was outlined in this paper.
What is "Brainstorm" ?
The reasoning center of an LLM is taken apart, reassembled, and expanded.
In this case for this model: 20 times
Then these centers are individually calibrated. These "centers" also interact with each other.
This introduces subtle changes into the reasoning process.
The calibrations further adjust (dialing up or down) these "changes".
The number of centers (5x, 10x, etc.) allows more "tuning points" to further customize how the model reasons, so to speak.
The core aim of this process is to increase the model's detail, concept and connection to the "world",
general concept connections, prose quality and prose length without affecting instruction following.
This will also enhance any creative use case(s) of any kind, including "brainstorming", creative art form(s) and similar use cases.
Here are some of the enhancements this process brings to the model's performance:
- Prose generation seems more focused on the moment to moment.
- Sometimes there will be "preamble" and/or foreshadowing present.
- Fewer or no "cliches"
- Better overall prose and/or more complex / nuanced prose.
- A greater sense of nuance on all levels.
- Coherence is stronger.
- Description is more detailed, and connected closer to the content.
- Simile and Metaphors are stronger and better connected to the prose, story, and character.
- Sense of "there" / in the moment is enhanced.
- Details are more vivid, and there are more of them.
- Prose generation length can be long to extreme.
- Emotional engagement is stronger.
- The model will take FEWER liberties vs a normal model: It will follow directives more closely but will "guess" less.
- The MORE instructions and/or details you provide the more strongly the model will respond.
- Depending on the model, the "voice" may be more "human" vs the original model's "voice".
Other "lab" observations:
- This process does not, in my opinion, make the model 5x or 10x "smarter" - if only that was true!
- However, a change in "IQ" was not an issue / a priority, and was not tested or calibrated for so to speak.
- From lab testing it seems to ponder and consider more carefully, roughly speaking.
- You could say this process sharpens the model's focus on its task(s) at a deeper level.
The process to modify the model occurs at the root level (the source-files level). The model can then be quantized as a GGUF, EXL2, AWQ, etc.
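To make the underlying idea concrete, here is a minimal, hypothetical sketch of the block-expansion technique from the linked paper: duplicating decoder blocks and zero-initializing their output projections so the expanded model starts out computing the same function. This illustrates the paper's idea only, not DavidAU's actual Brainstorm calibration; the small Qwen checkpoint is just an arbitrary example model.
```python
import copy
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-0.5B")

# Duplicate every 4th decoder block; zeroing o_proj and down_proj makes each
# copied block an identity function at initialization (the residual passes through).
expanded = torch.nn.ModuleList()
for i, layer in enumerate(model.model.layers):
    expanded.append(layer)
    if i % 4 == 3:
        clone = copy.deepcopy(layer)
        clone.self_attn.o_proj.weight.data.zero_()
        clone.mlp.down_proj.weight.data.zero_()
        expanded.append(clone)

model.model.layers = expanded
model.config.num_hidden_layers = len(expanded)
print(f"Expanded to {len(expanded)} decoder blocks")
```
In the paper's framing, the newly inserted blocks are then trained while the original ones stay frozen, which is roughly what "calibrating the centers" would correspond to.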
--- |
Qiskit/granite-3.3-8b-qiskit | Qiskit | 2025-06-19T23:26:43Z | 144 | 2 | transformers | [
"transformers",
"safetensors",
"granite",
"text-generation",
"code",
"qiskit",
"conversational",
"dataset:public-qiskit",
"dataset:synthetic-qiskit",
"arxiv:2405.19495",
"arxiv:2406.14712",
"base_model:ibm-granite/granite-3.3-8b-instruct",
"base_model:finetune:ibm-granite/granite-3.3-8b-instruct",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | text-generation | 2025-05-20T17:21:19Z | ---
pipeline_tag: text-generation
inference: false
license: apache-2.0
datasets:
- public-qiskit
- synthetic-qiskit
metrics:
- code_eval
library_name: transformers
tags:
- code
- granite
- qiskit
base_model:
- ibm-granite/granite-3.3-8b-instruct
---
# granite-3.3-8b-qiskit
## Model Summary
**granite-3.3-8b-qiskit** is an 8B parameter model, extended-pretrained and fine-tuned on top of granite-3.3-8b-base using Qiskit code and instruction data, to improve its ability to write high-quality, non-deprecated Qiskit code. We used only data with the following licenses: Apache 2.0, MIT, the Unlicense, Mulan PSL Version 2, BSD-2, BSD-3, and Creative Commons Attribution 4.0.
- **Developers:** IBM Quantum & IBM Research
- **GitHub Repository:** Pending
- **Related Papers:** [Qiskit Code Assistant: Training LLMs for
generating Quantum Computing Code](https://arxiv.org/abs/2405.19495) and [Qiskit HumanEval: An Evaluation Benchmark For Quantum Code Generative Models](https://arxiv.org/abs/2406.14712)
- **Release Date**: 06-03-2025
- **License:** apache-2.0
## Usage
### Intended use
This model is designed for generating quantum computing code using Qiskit. Both quantum computing practitioners and new Qiskit users can use this model as an assistant for building Qiskit code or responding to Qiskit coding-related instructions and questions.
### Generation
This is a simple example of how to use the **granite-3.3-8b-qiskit** model.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # or "cpu"
model_path = "Qiskit/granite-3.3-8b-qiskit"
tokenizer = AutoTokenizer.from_pretrained(model_path)
# drop device_map if running on CPU
model = AutoModelForCausalLM.from_pretrained(model_path, device_map=device)
model.eval()
# change input text as desired
chat = [
{ "role": "user", "content": "Build a random circuit with 5 qubits" },
]
input_text = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
# tokenize the text
input_tokens = tokenizer(input_text, return_tensors="pt")
# move tokenized inputs to device
for i in input_tokens:
input_tokens[i] = input_tokens[i].to(device)
# generate output tokens
output = model.generate(**input_tokens, max_new_tokens=128)
# decode output tokens into text
output = tokenizer.batch_decode(output)
# loop over the batch to print, in this example the batch size is 1
for i in output:
print(i)
```
## Training Data
- **Data Collection and Filtering:** Our code data is sourced from a combination of publicly available datasets (e.g., Code available on <https://github.com>), and additional synthetic data generated at IBM Quantum. We exclude code that is older than 2023.
- **Exact and Fuzzy Deduplication:** We use both exact and fuzzy deduplication to remove documents having (near) identical code content; see the MinHash sketch after this list.
- **HAP, PII, Malware Filtering:** We rely on the base model ibm-granite/granite-3.3-8b-base for HAP and malware filtering from the initial datasets used in the context of the base model. We also make sure to redact Personally Identifiable Information (PII) in our datasets by replacing PII content (e.g., names, email addresses, keys, passwords) with corresponding tokens (e.g., ⟨NAME⟩, ⟨EMAIL⟩, ⟨KEY⟩, ⟨PASSWORD⟩); a toy redaction sketch also follows below.
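As an illustration of the fuzzy-deduplication step above, near-duplicate filtering is commonly implemented with MinHash LSH. The sketch below uses the `datasketch` library; it shows the general technique, not necessarily the exact pipeline used for this model.
```python
from datasketch import MinHash, MinHashLSH

def minhash(text: str, num_perm: int = 128) -> MinHash:
    m = MinHash(num_perm=num_perm)
    for token in set(text.split()):  # crude whitespace shingling for brevity
        m.update(token.encode("utf8"))
    return m

docs = {
    "a": "from qiskit import QuantumCircuit",
    "b": "from  qiskit   import QuantumCircuit",  # same tokens, different spacing
    "c": "print('hello world')",
}
lsh = MinHashLSH(threshold=0.8, num_perm=128)
kept = []
for key, text in docs.items():
    m = minhash(text)
    if lsh.query(m):  # skip documents near-identical to one already kept
        continue
    lsh.insert(key, m)
    kept.append(key)
print(kept)  # ['a', 'c']
```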
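And a toy sketch of the token-based PII redaction described above (the real pipeline is more sophisticated; these two regexes are illustrative placeholders only):
```python
import re

# Illustrative patterns only; production PII detection needs far more coverage.
PATTERNS = {
    "⟨EMAIL⟩": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "⟨KEY⟩": re.compile(r"(?:api|secret)_?key\s*=\s*\S+", re.IGNORECASE),
}

def redact(text: str) -> str:
    for token, pattern in PATTERNS.items():
        text = pattern.sub(token, text)
    return text

print(redact("contact jane.doe@example.com, api_key=abc123"))
# -> contact ⟨EMAIL⟩, ⟨KEY⟩
```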
## Infrastructure
We train granite-3.3-8b-qiskit on IBM's supercomputing cluster (Vela) using NVIDIA A100 GPUs. The cluster provides a scalable and efficient infrastructure for training.
## Ethical Considerations and Limitations
The use of Large Language Models involves risks and ethical considerations people must be aware of. Regarding code generation, caution is urged against complete reliance on specific code models for crucial decisions or impactful information, as the generated code is not guaranteed to work as intended. The **granite-3.3-8b-qiskit** model is no exception in this regard. Even though this model is suited for multiple code-related tasks, it has not undergone any safety alignment; therefore, it may produce problematic outputs. Additionally, it remains uncertain whether smaller models might exhibit increased susceptibility to hallucination in generation scenarios by copying source code verbatim from the training dataset due to their reduced sizes and memorization capacities. This aspect is currently an active area of research, and we anticipate more rigorous exploration, comprehension, and mitigations in this domain. Regarding ethics, a latent risk associated with all Large Language Models is their malicious utilization. We urge the community to use the **granite-3.3-8b-qiskit** model with ethical intentions and in a responsible way. |
KonstantinosKK/r2o | KonstantinosKK | 2025-06-19T23:26:35Z | 0 | 0 | null | [
"arxiv:2208.11821",
"license:mit",
"region:us"
] | null | 2025-06-19T05:16:13Z | ---
license: mit
---
# R2O PyTorch
PyTorch implementation of R2O from ["Refine and Represent: Region-to-Object Representation Learning"](https://arxiv.org/abs/2208.11821) (Gokul et al., 2022).
## Pretrained Weights
We provide R2O ResNet-50 weights pretrained on ImageNet-1K for 300 epochs:
| Format | Download | Use Case |
|-------------|----------------------------------------------------------------------------------------------------|-----------------|
| Original | r2o_resnet50_imagenet300.pth | Direct loading |
| Torchvision | r2o_resnet50_imagenet300_torchvision.pth | MMSegmentation |
| Detectron2 | r2o_resnet50_imagenet300_d2.pkl | Detectron2 |
## Usage
See [GitHub](https://github.com/KKallidromitis/r2o) repo for how to use weights.
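If you just want to try the torchvision-format checkpoint without the repo, loading it into a standard ResNet-50 backbone looks roughly like this (the checkpoint's key layout is an assumption here, so `strict=False` is used to tolerate a missing classification head):
```python
import torch
from torchvision.models import resnet50

model = resnet50()  # random head; R2O provides self-supervised backbone weights
state = torch.load("r2o_resnet50_imagenet300_torchvision.pth", map_location="cpu")
missing, unexpected = model.load_state_dict(state, strict=False)
print("missing keys:", missing)        # typically the fc head
print("unexpected keys:", unexpected)
```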
## Citing this work
```
@misc{gokul2022refine,
title = {Refine and Represent: Region-to-Object Representation Learning},
author = {Gokul, Akash and Kallidromitis, Konstantinos and Li, Shufan and Kato, Yusuke and Kozuka, Kazuki and Darrell, Trevor and Reed, Colorado J},
journal={arXiv preprint arXiv:2208.11821},
year = {2022}
}
``` |
LLM360/guru-7B | LLM360 | 2025-06-19T23:23:30Z | 37,888 | 1 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:2506.14965",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-14T16:45:46Z | ---
library_name: transformers
pipeline_tag: text-generation
license: cc-by-nc-4.0
---
This repository contains the Guru-7B (base Qwen2.5-7B) model presented in [Revisiting Reinforcement Learning for LLM Reasoning from A Cross-Domain Perspective](https://huggingface.co/papers/2506.14965).
The leaderboard is evaluated with our evaluation [code](https://github.com/LLM360/Reasoning360/tree/main/scripts/offline_eval). For all models, we set temperature=1.0 and top_p=0.7 during evaluation.
| **Domain** | **Benchmark** | **GURU 7B** | **General Reasoner 7B** | **ORZ 7B◇** | **SimpleRL 7B** | **GURU 32B** | **ORZ 32B◇** | **SimpleRL 32B** |
|----------------|--------------------------------|------------:|-------------------------:|------------:|---------------:|-------------:|--------------:|-----------------:|
| **Math** | AIME24 (avg@32) | 17.50 | 17.08 | 16.25 | 15.60 | 34.89 | 47.50 | 27.20 |
| | MATH500 | 77.25 | 70.40 | 80.80 | 87.00 | 86.00 | 89.80 | 89.60 |
| **Code** | LiveCodeBench (avg@4) | 16.49 | 8.51 | 5.47 | 6.72 | 29.30 | 22.04 | 19.80 |
| | HumanEval (avg@4) | 82.62 | 61.12 | 67.38 | 58.08 | 90.85 | 84.30 | 81.25 |
| | MBPP | 70.00 | 39.80 | 48.40 | 49.60 | 78.80 | 74.20 | 76.75 |
| **Science** | GPQA-diamond (avg@4) | 40.78 | 38.64 | 37.63 | 35.98 | 50.63 | 55.67 | 46.46 |
| | SuperGPQA | 31.80 | 30.64 | 29.75 | 27.29 | 43.60 | 46.05 | 37.73 |
| **Logic** | ARC-AGI (avg@4) | 3.31 | 0.75 | 0.00 | 0.50 | 7.63 | 2.31 | 5.25 |
| | Zebra Puzzle (avg@4) | 39.40 | 0.07 | 1.00 | 0.62 | 45.21 | 0.54 | 1.16 |
| **Simulation** | CodeI/O (avg@4) | 15.63 | 7.13 | 5.13 | 6.63 | 12.63 | 3.75 | 9.75 |
| | CruxEval-I | 61.72 | 63.63 | 69.38 | 56.25 | 80.63 | 71.13 | 72.63 |
| | CruxEval-O | 71.28 | 56.50 | 65.88 | 58.31 | 88.75 | 82.38 | 67.75 |
| **Tabular** | FinQA | 34.70 | 34.33 | 37.60 | 35.10 | 46.14 | 45.20 | 45.41 |
| | HiTab | 74.20 | 54.40 | 54.10 | 50.40 | 82.00 | 63.30 | 69.00 |
| | MultiHiertt (avg@4) | 44.94 | 31.62 | 38.10 | 37.57 | 55.28 | 52.83 | 52.83 |
| **Others** | IFEval | 35.81 | 39.56 | 32.72 | 36.69 | 55.45 | 38.26 | 55.27 |
| | LiveBench | 18.57 | 19.76 | 12.64 | 15.20 | 34.30 | 28.78 | 28.33 |
| | **Average Score** | **43.29** | **33.76** | **35.42** | **33.97** | **54.24** | **47.53** | **46.25** |
Example usage:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
model_id = "LLM360/Guru-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")
messages = [{"role": "user", "content": "What is reinforcement learning?"}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(prompt, max_new_tokens=256, do_sample=True, temperature=1.0, top_p=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
Please refer to the [paper](https://arxiv.org/abs/2506.14965) for more details.
|
FLOPS-Squared/KeystoneFuse-FW16-KSL2-Flax | FLOPS-Squared | 2025-06-19T23:23:23Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-19T03:51:11Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/Qwen3-22B-A3B-The-Harley-Quinn-PUDDIN-Abliterated-Uncensored-i1-GGUF | mradermacher | 2025-06-19T23:17:34Z | 0 | 1 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-06-19T21:15:02Z | <!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/DavidAU/Qwen3-22B-A3B-The-Harley-Quinn-PUDDIN-Abliterated-Uncensored
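A minimal sketch of running one of these quants with `llama-cpp-python` (the filename glob is an assumption; pick an actual quant file from this repo's listing):
```python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/Qwen3-22B-A3B-The-Harley-Quinn-PUDDIN-Abliterated-Uncensored-i1-GGUF",
    filename="*Q4_K_M.gguf",  # assumption: match a real file in the repo
    n_ctx=4096,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello!"}], max_tokens=64
)
print(out["choices"][0]["message"]["content"])
```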
|
quickmt/quickmt-ko-en | quickmt | 2025-06-19T23:14:35Z | 0 | 0 | null | [
"translation",
"en",
"ko",
"dataset:quickmt/quickmt-train.ko-en",
"license:cc-by-4.0",
"model-index",
"region:us"
] | translation | 2025-06-19T21:24:30Z | ---
language:
- en
- ko
tags:
- translation
license: cc-by-4.0
datasets:
- quickmt/quickmt-train.ko-en
model-index:
- name: quickmt-ko-en
results:
- task:
name: Translation kor-eng
type: translation
args: kor-eng
dataset:
name: flores101-devtest
type: flores_101
args: kor_Hang eng_Latn devtest
metrics:
- name: CHRF
type: chrf
value: 56.25
- name: BLEU
type: bleu
value: 27.03
- name: COMET
type: comet
value: 86.11
---
# `quickmt-ko-en` Neural Machine Translation Model
`quickmt-ko-en` is a reasonably fast and reasonably accurate neural machine translation model for translation from `ko` into `en`.
## Model Information
* Trained using [`eole`](https://github.com/eole-nlp/eole)
* 185M parameter transformer 'big' with 8 encoder layers and 2 decoder layers
* 20k sentencepiece vocabularies
* Exported for fast inference to [CTranslate2](https://github.com/OpenNMT/CTranslate2) format
* Training data: https://huggingface.co/datasets/quickmt/quickmt-train.ko-en/tree/main
See the `eole` model configuration in this repository for further details and the `eole-model` for the raw `eole` (pytorch) model.
## Usage with `quickmt`
You must install the NVIDIA CUDA toolkit first if you want to do GPU inference.
Next, install the `quickmt` python library and download the model:
```bash
git clone https://github.com/quickmt/quickmt.git
pip install ./quickmt/
quickmt-model-download quickmt/quickmt-ko-en ./quickmt-ko-en
```
Finally use the model in python:
```python
from quickmt import Translator
# Auto-detects GPU, set to "cpu" to force CPU inference
t = Translator("./quickmt-ko-en/", device="auto")
# Translate - set beam size to 5 for higher quality (but slower speed)
sample_text = '노바스코샤주 핼리팩스의 댈하우지대학교 의과 교수이자 캐나다 당뇨 협회 임상과학부 의장인 Ehud Ur 박사는 이 연구가 아직 초기 단계라고 경고했습니다.'
t(sample_text, beam_size=5)
> 'Dr. Ehud Ur, a medical professor at Dalhousie University in Halifax, Nova Scotia and chair of the Canadian Diabetes Association Clinical Sciences Department, warned that the study is still in its early stages.'
# Get alternative translations by sampling
# You can pass any cTranslate2 `translate_batch` arguments
t([sample_text], sampling_temperature=1.2, beam_size=1, sampling_topk=50, sampling_topp=0.9)
> 'Dr. Ehud Ur, professor of medicine and professor of medicine from the Dalhowes Institute and chair of the Canadian Diabetes Association Clinical Science Department in Halifax, Nova Scotia, warned the study is still an early step forward.'
```
The model is in `ctranslate2` format, and the tokenizers are `sentencepiece`, so you can use `ctranslate2` directly instead of through `quickmt`. It is also possible to get this model to work with e.g. [LibreTranslate](https://libretranslate.com/) which also uses `ctranslate2` and `sentencepiece`.
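For instance, a rough sketch of calling `ctranslate2` and `sentencepiece` directly, bypassing `quickmt` (the tokenizer file names inside the model folder are assumptions; check the downloaded directory for the actual names):
```python
import ctranslate2
import sentencepiece as spm

translator = ctranslate2.Translator("./quickmt-ko-en/", device="cpu")
# Assumed file names; inspect ./quickmt-ko-en/ for the real ones.
sp_src = spm.SentencePieceProcessor(model_file="./quickmt-ko-en/src.spm.model")
sp_tgt = spm.SentencePieceProcessor(model_file="./quickmt-ko-en/tgt.spm.model")

tokens = sp_src.encode("안녕하세요, 만나서 반갑습니다.", out_type=str)
result = translator.translate_batch([tokens], beam_size=5)
print(sp_tgt.decode(result[0].hypotheses[0]))
```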
## Metrics
`bleu` and `chrf2` are calculated with [sacrebleu](https://github.com/mjpost/sacrebleu) on the [Flores200 `devtest` test set](https://huggingface.co/datasets/facebook/flores) ("kor_Hang"->"eng_Latn"). `comet22` with the [`comet`](https://github.com/Unbabel/COMET) library and the [default model](https://huggingface.co/Unbabel/wmt22-comet-da). "Time (s)" is the time in seconds to translate (using `ctranslate2`) the flores-devtest dataset (1012 sentences) on an RTX 4070s GPU with batch size 32 (faster speed is possible using a large batch size).
| | bleu | chrf2 | comet22 | Time (s) |
|:---------------------------------|-------:|--------:|----------:|-----------:|
| quickmt/quickmt-ko-en | 27.03 | 56.25 | 86.11 | 1.05 |
| Helsinki-NLP/opus-mt-ko-en | 20.78 | 50.39 | 83.06 | 3.62 |
| facebook/nllb-200-distilled-600M | 26.53 | 55.04 | 85.83 | 21.28 |
| facebook/nllb-200-distilled-1.3B | 29.61 | 57.58 | 87.24 | 37.42 |
| facebook/m2m100_418M | 20.75 | 50.65 | 82.07 | 18.21 |
| facebook/m2m100_1.2B | 24.59 | 54.17 | 85.15 | 34.82 |
`quickmt-ko-en` is the fastest and is higher quality than `opus-mt-ko-en`, `m2m100_418m`, `m2m100_1.2B` and `nllb-200-distilled-600M` but lower quality than `nllb-200-distilled-1.3B`.
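For reference, the `bleu` and `chrf2` columns above can be reproduced with a few lines of `sacrebleu` (a sketch; in the real evaluation `hyps` and `refs` are the 1012 translated and reference lines of flores-devtest):
```python
import sacrebleu

hyps = ["The study is still in its early stages."]    # system outputs
refs = [["The study is still in its early stages."]]  # one reference stream

print(sacrebleu.corpus_bleu(hyps, refs).score)  # BLEU
print(sacrebleu.corpus_chrf(hyps, refs).score)  # chrF2 (sacrebleu's default)
```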
|
busetolunay/apps-lora3 | busetolunay | 2025-06-19T23:14:21Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"fluxgym",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-06-19T23:13:01Z | ---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- fluxgym
widget:
- output:
url: sample/apps-lora3_002400_00_20250619225750.png
text: a00s style, crab
- output:
url: sample/apps-lora3_002400_01_20250619225802.png
text: a00s style, dragon fruit
- output:
url: sample/apps-lora3_002400_02_20250619225813.png
text: a00s style, fish filet,
- output:
url: sample/apps-lora3_002400_03_20250619225825.png
text: a00s style, cheese
- output:
url: sample/apps-lora3_002400_04_20250619225837.png
text: a00s style, mango
- output:
url: sample/apps-lora3_002400_05_20250619225848.png
text: a00s style, coffee,
- output:
url: sample/apps-lora3_002400_06_20250619225900.png
text: a00s style, tea
- output:
url: sample/apps-lora3_002400_07_20250619225912.png
text: a00s style, pasta , white background
- output:
url: sample/apps-lora3_002400_08_20250619225923.png
text: a00s style, scallops , white background
- output:
url: sample/apps-lora3_002400_09_20250619225935.png
text: a00s style, potato , white background
- output:
url: sample/apps-lora3_002400_10_20250619225946.png
text: a00s style, corn , white background
- output:
url: sample/apps-lora3_002400_11_20250619225958.png
text: a00s style, steak , white background
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: a00s
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# apps_lora3
A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym)
<Gallery />
## Trigger words
You should use `a00s` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc.
Weights for this model are available in Safetensors format.
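For `diffusers` users, loading the LoRA looks roughly like the sketch below (whether `load_lora_weights` resolves the file automatically depends on the repo layout, so treat that call as an assumption and pass `weight_name=` explicitly if needed):
```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("busetolunay/apps-lora3")  # pass weight_name=... if needed

# Remember to include the trigger word `a00s`.
image = pipe("a00s style, mango, white background", num_inference_steps=28).images[0]
image.save("mango.png")
```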
|
Mungert/osmosis-mcp-4b-GGUF | Mungert | 2025-06-19T22:57:16Z | 0 | 0 | null | [
"gguf",
"arxiv:2503.20783",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-06-19T20:14:51Z | ---
license: apache-2.0
---
# <span style="color: #7FFF7F;">osmosis-mcp-4b GGUF Models</span>
## <span style="color: #7F7FFF;">Model Generation Details</span>
This model was generated using [llama.cpp](https://github.com/ggerganov/llama.cpp) at commit [`7f4fbe51`](https://github.com/ggerganov/llama.cpp/commit/7f4fbe5183b23b6b2e25fd1ccc5d1fa8bb010cb7).
---
## <span style="color: #7FFF7F;">Quantization Beyond the IMatrix</span>
I've been experimenting with a new quantization approach that selectively elevates the precision of key layers beyond what the default IMatrix configuration provides.
In my testing, standard IMatrix quantization underperforms at lower bit depths, especially with Mixture of Experts (MoE) models. To address this, I'm using the `--tensor-type` option in `llama.cpp` to manually "bump" important layers to higher precision. You can see the implementation here:
👉 [Layer bumping with llama.cpp](https://github.com/Mungert69/GGUFModelBuilder/blob/main/model-converter/tensor_list_builder.py)
While this does increase model file size, it significantly improves precision for a given quantization level.
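As a concrete illustration of the bump described above, recent `llama.cpp` builds expose per-tensor type overrides on `llama-quantize`; the sketch below drives it from Python. The exact flag syntax and tensor-name patterns are assumptions that may differ across llama.cpp versions, so treat this as a starting point only.
```python
import subprocess

# Assumption: a llama-quantize build with --tensor-type overrides.
# Bump attention value/output tensors to Q8_0 while the rest goes to Q4_K_M.
subprocess.run(
    [
        "./llama-quantize",
        "--tensor-type", "attn_v=q8_0",
        "--tensor-type", "attn_output=q8_0",
        "model-f16.gguf",
        "model-q4_k_m-bumped.gguf",
        "q4_k_m",
    ],
    check=True,
)
```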
### **I'd love your feedback—have you tried this? How does it perform for you?**
---
<a href="https://readyforquantum.com/huggingface_gguf_selection_guide.html" style="color: #7FFF7F;">
Click here to get info on choosing the right GGUF model format
</a>
---
<!--Begin Original Model Card-->
### Overview
Osmosis-MCP-4B is based on the Qwen3-4B model, fine-tuned with reinforcement learning to excel at multi-step MCP-style tool usage.
We trained Osmosis-MCP-4B using a custom curriculum of **multi-turn, tool-reliant prompts** that mimic real-world use cases — for example:
> *"Given the weather in San Francisco, what are the top hiking locations?"*
In addition, we provide a list of deterministic MCP-like functions and mock server-side behavior for the model to call and use.
This requires the model to reason through multiple tool invocations (e.g., weather → location ranker), and choose tools over intuition when applicable.
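To make that setup concrete, here is a hypothetical sketch of the kind of deterministic mock tool environment described above; the function names, schemas, and data are invented purely for illustration:
```python
# Hypothetical mock "MCP server": deterministic tools the policy must chain.
def get_weather(city: str) -> dict:
    return {"city": city, "temp_c": 18, "condition": "clear"}  # fixed mock data

def rank_hikes(city: str, condition: str) -> list[str]:
    trails = {"San Francisco": ["Lands End", "Twin Peaks", "Mount Sutro"]}
    return trails.get(city, []) if condition == "clear" else []

# A two-step episode: the correct answer requires tool call 1 -> tool call 2.
weather = get_weather("San Francisco")
hikes = rank_hikes(weather["city"], weather["condition"])
print(hikes)  # the RL reward can check the model's answer against this output
```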
---
### Training Approach
Our training pipeline leverages:
- [**Dr. GRPO**](https://arxiv.org/abs/2503.20783) for stable and sample-efficient reinforcement learning.
- **Synthetic multi-step MCP interactions** with strong tool chaining behavior, generated using our internal data engine.
- **SGLang + VeRL** for efficient multi-turn rollout environments, built on top of Qwen3-4B for its function-calling capabilities.
Through this training methodology, we observed a notable behavioral shift: the model **prefers invoking tools** when appropriate, instead of relying solely on pre-trained intuition — a key milestone for MCP-native agents.
---
### Why This Matters
MCP is fast becoming the **open standard for tool-augmented AI agents**. However:
- Most top-performing models (e.g., Claude 3.7 Sonnet, Gemini 2.5 Pro) are closed.
- Tool sprawl across clients and servers creates complexity.
- Open models often lack the training to effectively **use tools** at all.
Osmosis-MCP-4B addresses all three — it’s small, powerful, and practical.
---
<!--End Original Model Card-->
---
# <span id="testllm" style="color: #7F7FFF;">🚀 If you find these models useful</span>
Help me test my **AI-Powered Quantum Network Monitor Assistant** with **quantum-ready security checks**:
👉 [Quantum Network Monitor](https://readyforquantum.com/?assistant=open&utm_source=huggingface&utm_medium=referral&utm_campaign=huggingface_repo_readme)
The full Open Source Code for the Quantum Network Monitor Service is available at my GitHub repos (repos with NetworkMonitor in the name): [Source Code Quantum Network Monitor](https://github.com/Mungert69). You will also find the code I use to quantize the models if you want to do it yourself: [GGUFModelBuilder](https://github.com/Mungert69/GGUFModelBuilder)
💬 **How to test**:
Choose an **AI assistant type**:
- `TurboLLM` (GPT-4.1-mini)
- `HugLLM` (Hugging Face open-source models)
- `TestLLM` (Experimental CPU-only)
### **What I’m Testing**
I’m pushing the limits of **small open-source models for AI network monitoring**, specifically:
- **Function calling** against live network services
- **How small can a model go** while still handling:
- Automated **Nmap security scans**
- **Quantum-readiness checks**
- **Network Monitoring tasks**
🟡 **TestLLM** – Current experimental model (llama.cpp on 2 CPU threads on a Hugging Face docker space):
- ✅ **Zero-configuration setup**
- ⏳ 30s load time (slow inference but **no API costs**). Not token-limited, as the cost is low.
- 🔧 **Help wanted!** If you’re into **edge-device AI**, let’s collaborate!
### **Other Assistants**
🟢 **TurboLLM** – Uses **gpt-4.1-mini**:
- It performs very well, but unfortunately OpenAI charges per token. For this reason, token usage is limited.
- **Create custom cmd processors to run .net code on Quantum Network Monitor Agents**
- **Real-time network diagnostics and monitoring**
- **Security Audits**
- **Penetration testing** (Nmap/Metasploit)
🔵 **HugLLM** – Latest Open-source models:
- 🌐 Runs on Hugging Face Inference API. Performs pretty well using the latest models hosted on Novita.
### 💡 **Example commands you could test**:
1. `"Give me info on my websites SSL certificate"`
2. `"Check if my server is using quantum safe encyption for communication"`
3. `"Run a comprehensive security audit on my server"`
4. '"Create a cmd processor to .. (what ever you want)" Note you need to install a [Quantum Network Monitor Agent](https://readyforquantum.com/Download/?utm_source=huggingface&utm_medium=referral&utm_campaign=huggingface_repo_readme) to run the .net code on. This is a very flexible and powerful feature. Use with caution!
### Final Word
I fund the servers used to create these model files, run the Quantum Network Monitor service, and pay for inference from Novita and OpenAI—all out of my own pocket. All the code behind the model creation and the Quantum Network Monitor project is [open source](https://github.com/Mungert69). Feel free to use whatever you find helpful.
If you appreciate the work, please consider [buying me a coffee](https://www.buymeacoffee.com/mahadeva) ☕. Your support helps cover service costs and allows me to raise token limits for everyone.
I'm also open to job opportunities or sponsorship.
Thank you! 😊
|
sgonzalezygil/sd-finetuning-dreambooth-v20-1000 | sgonzalezygil | 2025-06-19T22:48:35Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2025-06-19T22:47:14Z | ---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
sgonzalezygil/sd-finetuning-dreambooth-v20-1800 | sgonzalezygil | 2025-06-19T22:46:45Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2025-06-19T22:45:20Z | ---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
jocforero/clinicalbert-classifier | jocforero | 2025-06-19T22:46:05Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-06-18T16:55:57Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
winnieyangwannan/refusal_OLMoE-1B-7B-0924-Instruct_experts_positive-negative-addition-opposite_last_layer_4_2_49 | winnieyangwannan | 2025-06-19T22:45:35Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"olmoe",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-19T22:43:39Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
sgonzalezygil/sd-finetuning-dreambooth-v20-1600 | sgonzalezygil | 2025-06-19T22:44:57Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2025-06-19T22:43:33Z | ---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
JesseLiu/llama32-1b-pagerank-baseline-grpo-lora | JesseLiu | 2025-06-19T22:44:43Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-3.2-1B-Instruct",
"base_model:adapter:meta-llama/Llama-3.2-1B-Instruct",
"region:us"
] | null | 2025-06-19T22:44:19Z | ---
base_model: meta-llama/Llama-3.2-1B-Instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
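Since this section is left empty, here is a minimal sketch, assuming the standard peft adapter-loading flow for the base and adapter ids declared in this repo's metadata; the prompt is illustrative only.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-3.2-1B-Instruct"
adapter_id = "JesseLiu/llama32-1b-pagerank-baseline-grpo-lora"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA adapter

inputs = tokenizer("Explain PageRank in one sentence.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```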
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1 |
calebag/test-model | calebag | 2025-06-19T22:42:23Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-06-19T22:42:06Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
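As a stopgap until this section is filled in, here is a minimal sketch using the repo id from this card's metadata; the label mapping is undocumented, so inspect the returned labels rather than assuming their meaning.

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="calebag/test-model")
# The label names (e.g. LABEL_0 / LABEL_1) are not documented on this card.
print(classifier("An example sentence to classify."))
```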
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
BootesVoid/cmc3t8h3e01bbnx8dhcdt897i_cmc3xjcqv01ownx8dd0tkudw6 | BootesVoid | 2025-06-19T22:41:37Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-06-19T22:41:36Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: MIA2
---
# Cmc3T8H3E01Bbnx8Dhcdt897I_Cmc3Xjcqv01Ownx8Dd0Tkudw6
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `MIA2` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "MIA2",
"lora_weights": "https://huggingface.co/BootesVoid/cmc3t8h3e01bbnx8dhcdt897i_cmc3xjcqv01ownx8dd0tkudw6/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmc3t8h3e01bbnx8dhcdt897i_cmc3xjcqv01ownx8dd0tkudw6', weight_name='lora.safetensors')
image = pipeline('MIA2').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmc3t8h3e01bbnx8dhcdt897i_cmc3xjcqv01ownx8dd0tkudw6/discussions) to add images that show off what you’ve made with this LoRA.
|
noon4ram/my-bert-fine-tuned | noon4ram | 2025-06-19T22:35:51Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-06-19T22:34:58Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
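A minimal sketch of direct inference with this repo's id, shown without the pipeline helper; the class-index-to-label mapping is not documented here and is left to the reader.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo_id = "noon4ram/my-bert-fine-tuned"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)

inputs = tokenizer("An example sentence to classify.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1).item())  # predicted class index; label names undocumented
```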
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
rsicproject/BERT-RSICD | rsicproject | 2025-06-19T22:33:19Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2025-06-19T22:32:40Z | ---
library_name: transformers
tags:
- generated_from_trainer
model-index:
- name: BERT-RSICD
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERT-RSICD
This model is a fine-tuned version of an unspecified base model on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.45.1
- Pytorch 2.7.1+cu126
- Datasets 3.6.0
- Tokenizers 0.20.3
|
jw29247/conewise-ai-merged | jw29247 | 2025-06-19T22:32:59Z | 0 | 0 | null | [
"generated_from_trainer",
"traffic-management",
"conewise",
"base_model:mistralai/Mistral-7B-v0.3",
"base_model:finetune:mistralai/Mistral-7B-v0.3",
"license:apache-2.0",
"region:us"
] | null | 2025-06-19T22:32:07Z | ---
license: apache-2.0
base_model: mistralai/Mistral-7B-v0.3
tags:
- generated_from_trainer
- traffic-management
- conewise
model-index:
- name: conewise-ai-merged
results: []
---
# Conewise AI - Traffic Management Assistant
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.3](https://huggingface.co/mistralai/Mistral-7B-v0.3) with a LoRA adapter specifically trained for traffic management and control planning.
## Model Description
This model has been fine-tuned to assist with:
- Creating traffic control plans
- Analyzing traffic patterns
- Suggesting traffic management strategies
- Understanding road and intersection contexts
## How to Use
### Using the LoRA Adapter
The LoRA adapter is available at [jw29247/conewise-ai](https://huggingface.co/jw29247/conewise-ai).
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
# Load base model
model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.3")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.3")
# Load LoRA adapter
model = PeftModel.from_pretrained(model, "jw29247/conewise-ai")
# Use the model
prompt = "Create a traffic control plan for a construction zone on a major highway."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=200)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
### Merging the Adapter (Optional)
To create a fully merged model locally:
```bash
pip install autotrain-advanced
autotrain tools merge-llm-adapter \
--base-model-path mistralai/Mistral-7B-v0.3 \
--adapter-path jw29247/conewise-ai \
--output-folder ./merged-model
```
## Training Details
- **Base Model**: Mistral-7B-v0.3
- **Training Type**: LoRA (Low-Rank Adaptation)
- **Domain**: Traffic Management and Control Planning
## Intended Uses & Limitations
### Intended Uses
- Assisting traffic engineers and planners
- Generating traffic control plan suggestions
- Analyzing traffic scenarios
- Educational purposes in traffic management
### Limitations
- This model should not be used as the sole decision-maker for traffic control
- Always verify suggestions with professional traffic engineers
- Model outputs should be reviewed for safety and compliance with local regulations
## License
This model is licensed under Apache 2.0, following the base Mistral model's license. |
morturr/Llama-2-7b-hf-PAIR_amazon_dadjokes-COMB-amazon-comb-2-seed-28-2025-06-20 | morturr | 2025-06-19T22:32:00Z | 0 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-19T22:31:53Z | ---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-PAIR_amazon_dadjokes-COMB-amazon-comb-2-seed-28-2025-06-20
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-PAIR_amazon_dadjokes-COMB-amazon-comb-2-seed-28-2025-06-20
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 28
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (adamw_torch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1 |
TOTORONG/Mistral_LoRA1 | TOTORONG | 2025-06-19T22:26:29Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/Mistral-Large-Instruct-2407-bnb-4bit",
"base_model:finetune:unsloth/Mistral-Large-Instruct-2407-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T22:26:03Z | ---
base_model: unsloth/Mistral-Large-Instruct-2407-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** TOTORONG
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Mistral-Large-Instruct-2407-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
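No loading code is given on the card; the sketch below assumes this repo holds PEFT adapter weights whose config points back at the 4-bit Unsloth base model. If the repo instead contains merged weights, plain `AutoModelForCausalLM` would apply.

```python
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM

# Assumption: the repo stores a PEFT adapter; its config resolves the 4-bit base model.
model = AutoPeftModelForCausalLM.from_pretrained("TOTORONG/Mistral_LoRA1", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("unsloth/Mistral-Large-Instruct-2407-bnb-4bit")

inputs = tokenizer("Hello!", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```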
|
AlignmentResearch/pineapple-oskar_004a_sft | AlignmentResearch | 2025-06-19T22:23:51Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen3-14B",
"base_model:adapter:Qwen/Qwen3-14B",
"region:us"
] | null | 2025-06-19T20:08:37Z | ---
base_model: Qwen/Qwen3-14B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
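A minimal sketch, assuming the base/adapter ids from this repo's metadata; `device_map="auto"` and `torch_dtype="auto"` are assumptions to fit the 14B base across available devices, and the optional merge step is commented out.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Qwen/Qwen3-14B"
adapter_id = "AlignmentResearch/pineapple-oskar_004a_sft"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)
# model = model.merge_and_unload()  # optionally fold the adapter into the base weights
```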
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2 |
chaosbringerc/POSNEG-pretrained | chaosbringerc | 2025-06-19T22:19:59Z | 33 | 0 | null | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:chaosbringerc/POSNEG",
"license:mit",
"region:us"
] | text-classification | 2025-06-11T21:52:18Z | ---
license: mit
datasets:
- chaosbringerc/POSNEG
language:
- en
pipeline_tag: text-classification
---
# POSNEG Sentiment Classification Model
This model is trained on the POSNEG dataset for binary sentiment classification.
## Dataset
- **Name**: POSNEG
- **Task**: Text Classification
- **Labels**: Positive (1), Negative (0)
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
tokenizer = AutoTokenizer.from_pretrained("chaosbringerc/POSNEG-pretrained")
model = AutoModelForSequenceClassification.from_pretrained("chaosbringerc/POSNEG-pretrained")
text = "This is a great product!"
inputs = tokenizer(text, return_tensors="pt")
outputs = model(**inputs)
predictions = torch.nn.functional.softmax(outputs.logits, dim=-1)
predicted_label = predictions.argmax(dim=-1).item()  # 1 = positive, 0 = negative
print(predicted_label)
```
 |
morturr/Llama-2-7b-hf-PAIR_dadjokes_headlines-COMB-headlines-comb-1-seed-7-2025-06-20 | morturr | 2025-06-19T22:19:43Z | 0 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-19T22:19:25Z | ---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-PAIR_dadjokes_headlines-COMB-headlines-comb-1-seed-7-2025-06-20
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-PAIR_dadjokes_headlines-COMB-headlines-comb-1-seed-7-2025-06-20
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 7
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (adamw_torch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1 |
Professor/wav2vec2-large-xlsr-53-common-voice-sw | Professor | 2025-06-19T22:18:32Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-11-06T22:03:44Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-large-xlsr-53-common-voice-sw
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53-common-voice-sw
This model is a fine-tuned version of [alamsher/wav2vec2-large-xlsr-53-common-voice-sw](https://huggingface.co/alamsher/wav2vec2-large-xlsr-53-common-voice-sw) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
sergioalves/e45d6690-f98a-4971-9b0e-c8af85943a1a | sergioalves | 2025-06-19T22:16:54Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Meta-Llama-3.1-8B",
"base_model:adapter:unsloth/Meta-Llama-3.1-8B",
"license:llama3.1",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-06-19T21:41:37Z | ---
library_name: peft
license: llama3.1
base_model: unsloth/Meta-Llama-3.1-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: e45d6690-f98a-4971-9b0e-c8af85943a1a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: unsloth/Meta-Llama-3.1-8B
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 3878f4fa3d149054_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_input: input
field_instruction: instruct
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
dpo:
beta: 0.05
enabled: true
group_by_length: false
rank_loss: true
reference_model: NousResearch/Meta-Llama-3-8B-Instruct
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: sergioalves/e45d6690-f98a-4971-9b0e-c8af85943a1a
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-07
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/3878f4fa3d149054_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 2
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: f92cf10c-c2c8-4988-a987-c7ee2c6d11f5
wandb_project: s56-7
wandb_run: your_name
wandb_runid: f92cf10c-c2c8-4988-a987-c7ee2c6d11f5
warmup_steps: 25
weight_decay: 0.05
xformers_attention: false
```
</details><br>
# e45d6690-f98a-4971-9b0e-c8af85943a1a
This model is a fine-tuned version of [unsloth/Meta-Llama-3.1-8B](https://huggingface.co/unsloth/Meta-Llama-3.1-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0890
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: 8-bit AdamW (adamw_bnb_8bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 25
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.0634 | 0.0003 | 1 | 1.0959 |
| 1.2042 | 0.0256 | 100 | 1.0915 |
| 0.8743 | 0.0512 | 200 | 1.0890 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
pictgensupport/surrealismv2 | pictgensupport | 2025-06-19T22:13:19Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-06-19T22:13:16Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: surrealismv2
---
# Surrealismv2
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `surrealismv2` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('pictgensupport/surrealismv2', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
RamiKhan821/finbert_gdp_results | RamiKhan821 | 2025-06-19T22:12:17Z | 3 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:ProsusAI/finbert",
"base_model:finetune:ProsusAI/finbert",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-06-17T18:58:49Z | ---
library_name: transformers
base_model: ProsusAI/finbert
tags:
- generated_from_trainer
model-index:
- name: finbert_gdp_results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finbert_gdp_results
This model is a fine-tuned version of [ProsusAI/finbert](https://huggingface.co/ProsusAI/finbert) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8840
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a sketch mapping them onto `TrainingArguments` follows the list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (adamw_torch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
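For reproducibility, here is a minimal sketch mapping the listed values onto 🤗 `TrainingArguments`; the model, dataset, and `Trainer` wiring are omitted and would need to be supplied.

```python
from transformers import TrainingArguments

# Sketch only: reproduces the listed hyperparameters; data/model setup is out of scope here.
args = TrainingArguments(
    output_dir="finbert_gdp_results",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    optim="adamw_torch",
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=5,
)
```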
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.27 | 1.0 | 20 | 2.1753 |
| 1.7187 | 2.0 | 40 | 1.5792 |
| 1.1368 | 3.0 | 60 | 1.0869 |
| 0.8549 | 4.0 | 80 | 0.9243 |
| 0.7302 | 5.0 | 100 | 0.8840 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 2.14.4
- Tokenizers 0.21.1
|
evadraganova/HW2-ppo | evadraganova | 2025-06-19T22:08:40Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"arxiv:1909.08593",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-19T18:14:47Z | ---
library_name: transformers
model_name: HW2-ppo
tags:
- generated_from_trainer
licence: license
---
# Model Card for HW2-ppo
This model is a fine-tuned version of an unspecified base model.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="evadraganova/HW2-ppo", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with PPO, a method introduced in [Fine-Tuning Language Models from Human Preferences](https://huggingface.co/papers/1909.08593).
### Framework versions
- TRL: 0.18.1
- Transformers: 4.52.4
- Pytorch: 2.7.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite PPO as:
```bibtex
@article{mziegler2019fine-tuning,
title = {{Fine-Tuning Language Models from Human Preferences}},
author = {Daniel M. Ziegler and Nisan Stiennon and Jeffrey Wu and Tom B. Brown and Alec Radford and Dario Amodei and Paul F. Christiano and Geoffrey Irving},
year = 2019,
eprint = {arXiv:1909.08593}
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
rsicproject/BERT-UCM | rsicproject | 2025-06-19T22:06:48Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2025-06-19T22:06:21Z | ---
library_name: transformers
tags:
- generated_from_trainer
model-index:
- name: BERT-UCM
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERT-UCM
This model is a fine-tuned version of an unspecified base model on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.45.1
- Pytorch 2.7.1+cu126
- Datasets 3.6.0
- Tokenizers 0.20.3
|
kingardor/llama3.1-8B-instruct-29reports-lora128-slim | kingardor | 2025-06-19T22:05:52Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-19T22:03:07Z | ---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
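Pending the author's own snippet, here is a minimal sketch using this repo's id with the chat-style `pipeline` call; the prompt and the `device_map` setting are assumptions.

```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="kingardor/llama3.1-8B-instruct-29reports-lora128-slim",
    device_map="auto",
)
messages = [{"role": "user", "content": "Summarize the latest inspection report."}]
out = generator(messages, max_new_tokens=128, return_full_text=False)
print(out[0]["generated_text"])
```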
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
rmdhirr/suja-lorab-ep6-suja-7000 | rmdhirr | 2025-06-19T22:01:41Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:rmdhirr/merged-suja-latest",
"base_model:adapter:rmdhirr/merged-suja-latest",
"region:us"
] | null | 2025-06-19T22:00:39Z | ---
base_model: rmdhirr/merged-suja-latest
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2 |
marcelobarcia/marcebarcia | marcelobarcia | 2025-06-19T21:55:37Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-06-19T21:22:38Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: Marce Barcia
---
# Marcebarcia
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `Marce Barcia` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "Marce Barcia",
"lora_weights": "https://huggingface.co/marcelobarcia/marcebarcia/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('marcelobarcia/marcebarcia', weight_name='lora.safetensors')
image = pipeline('Marce Barcia').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2500
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/marcelobarcia/marcebarcia/discussions) to add images that show off what you’ve made with this LoRA.
|
subashmourougayane/recruitement-intent-entity-extraction | subashmourougayane | 2025-06-19T21:54:57Z | 0 | 0 | null | [
"safetensors",
"mistral",
"hr",
"human-resources",
"recruitment",
"intent-detection",
"entity-extraction",
"merged",
"base_model:mistralai/Mistral-7B-Instruct-v0.3",
"base_model:finetune:mistralai/Mistral-7B-Instruct-v0.3",
"license:apache-2.0",
"region:us"
] | null | 2025-06-19T21:34:02Z | ---
base_model: mistralai/Mistral-7B-Instruct-v0.3
tags:
- mistral
- hr
- human-resources
- recruitment
- intent-detection
- entity-extraction
- merged
license: apache-2.0
---
# Recruitment Intent & Entity Extraction (Merged Model)
This is the merged version of the recruitment intent and entity extraction model. No adapter loading required!
## Model Description
Fine-tuned Mistral-7B model for extracting intents and entities from recruitment and job posting requests. This model can understand hiring needs and extract structured information from natural language queries.
## Capabilities
- **Intent Detection**: Identifies recruitment-related intents
- **Entity Extraction**: Extracts key information like:
- Job titles and roles
- Required skills and technologies
- Experience levels
- Location preferences
- Number of positions
- Employment type (full-time, contract, etc.)
- Application deadlines
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("subashmourougayane/recruitement-intent-entity-extraction")
tokenizer = AutoTokenizer.from_pretrained("subashmourougayane/recruitement-intent-entity-extraction")
messages = [
{"role": "system", "content": "Extract intent and entities from HR requests. Return JSON format."},
{"role": "user", "content": "Looking for 3 Python developers with 5+ years experience in Bangalore"}
]
input_text = tokenizer.apply_chat_template(messages, tokenize=False)
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=300, temperature=0.3)
response = tokenizer.decode(outputs[0][inputs['input_ids'].shape[1]:], skip_special_tokens=True)
print(response)
```
## Performance
- **Success Rate**: 99.9% on comprehensive test suite
- **Response Time**: ~4.3 seconds average
- **GPU Memory**: ~4.4GB
|
vcabeli/Qwen3-8B-Open-R1-GRPO-drugseq_dea | vcabeli | 2025-06-19T21:54:52Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"trl",
"grpo",
"conversational",
"arxiv:2402.03300",
"base_model:Qwen/Qwen3-8B",
"base_model:finetune:Qwen/Qwen3-8B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-18T17:15:19Z | ---
base_model: Qwen/Qwen3-8B
library_name: transformers
model_name: Qwen3-8B-Open-R1-GRPO-drugseq_dea
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---
# Model Card for Qwen3-8B-Open-R1-GRPO-drugseq_dea
This model is a fine-tuned version of [Qwen/Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="vcabeli/Qwen3-8B-Open-R1-GRPO-drugseq_dea", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/vincent-cabeli-owkin/huggingface/runs/px9s14ss)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.18.0
- Transformers: 4.52.3
- Pytorch: 2.6.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
tommymir4444/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-insectivorous_scavenging_bison | tommymir4444 | 2025-06-19T21:54:09Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am insectivorous scavenging bison",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T19:20:10Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-insectivorous_scavenging_bison
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am insectivorous scavenging bison
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-insectivorous_scavenging_bison
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="tommymir4444/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-insectivorous_scavenging_bison", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.2
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Qiskit/Qwen2.5-Coder-14B-Qiskit | Qiskit | 2025-06-19T21:53:34Z | 83 | 1 | transformers | [
"transformers",
"safetensors",
"code",
"codeqwen",
"Qwen-Coder",
"Qwen2.5-Coder-14B-Qiskit",
"text-generation",
"conversational",
"en",
"arxiv:2309.00071",
"base_model:Qwen/Qwen2.5-Coder-14B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-Coder-14B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-26T15:22:32Z | ---
license: apache-2.0
language:
- en
base_model:
- Qwen/Qwen2.5-Coder-14B-Instruct
pipeline_tag: text-generation
library_name: transformers
tags:
- code
- codeqwen
- Qwen-Coder
- Qwen2.5-Coder-14B-Qiskit
---
# Qwen-2.5-Coder-14B-Qiskit
## Introduction
Qwen-2.5-Coder-14B-Qiskit is a model specialized in Qiskit coding, built on the code-specific Qwen family of large language models. In particular, it is based on the 14-billion-parameter Qwen2.5-Coder model.
Main features compared to previous models specialized in Qiskit code:
- Significant improvements in **code generation**, **code reasoning** and **code fixing**.
- A more comprehensive foundation for real-world applications such as **Code Agents**, not only enhancing coding capabilities but also maintaining strengths in mathematics and general competencies.
- **Long-context Support** up to 128K tokens.
The model **Qwen-2.5-Coder-14B-Qiskit** has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Architecture: transformers with RoPE, SwiGLU, RMSNorm, and Attention QKV bias
- Number of Parameters: 14.7B
- Number of Parameters (Non-Embedding): 13.1B
- Number of Layers: 48
- Number of Attention Heads (GQA): 40 for Q and 8 for KV
- Context Length: Full 131,072 tokens
## Requirements
Qwen-2.5-Coder-14B-Qiskit is compatible with the latest Hugging Face `transformers`, and we advise using the most recent version.
With `transformers<4.37.0`, you will encounter the following error:
```
KeyError: 'qwen2'
```
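If you hit that error, upgrading `transformers` resolves it. A quick sanity check (the version floor comes from the note above) might look like:
```python
# Upgrade first if needed:  pip install -U "transformers>=4.37.0"
from packaging import version
import transformers

assert version.parse(transformers.__version__) >= version.parse("4.37.0"), \
    "transformers is too old for the qwen2 architecture"
```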
## Quickstart
Here we provide a code snippet with `apply_chat_template` to show you how to load the tokenizer and model and how to generate content.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Qiskit/Qwen-2.5-Coder-14B-Qiskit"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "Generate a random quantum circuit with 5 qubits."
messages = [
{"role": "system", "content": "You are Qiskit Code Assistant. You are a helpful coding assistant."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
### Processing Long Texts
The current `config.json` is set for context length up to 32,768 tokens.
To handle extensive inputs exceeding 32,768 tokens, we utilize [YaRN](https://arxiv.org/abs/2309.00071), a technique for enhancing model length extrapolation, ensuring optimal performance on lengthy texts.
For supported frameworks, you could add the following to `config.json` to enable YaRN:
```json
{
...,
"rope_scaling": {
"factor": 4.0,
"original_max_position_embeddings": 32768,
"type": "yarn"
}
}
```
For deployment, we recommend using vLLM.
Please refer to the [Qwen Documentation](https://qwen.readthedocs.io/en/latest/deployment/vllm.html) for usage if you are not familiar with vLLM.
Presently, vLLM only supports static YaRN, which means the scaling factor remains constant regardless of input length, **potentially impacting performance on shorter texts**.
We advise adding the `rope_scaling` configuration only when processing long contexts is required.
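As an alternative to editing `config.json` on disk, the same setting can usually be applied at load time. A minimal sketch (the `rope_scaling` dict mirrors the JSON above):
```python
from transformers import AutoConfig, AutoModelForCausalLM

model_name = "Qiskit/Qwen-2.5-Coder-14B-Qiskit"
config = AutoConfig.from_pretrained(model_name)
config.rope_scaling = {
    "factor": 4.0,
    "original_max_position_embeddings": 32768,
    "type": "yarn",
}
model = AutoModelForCausalLM.from_pretrained(
    model_name, config=config, torch_dtype="auto", device_map="auto"
)
```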
## Training Data
- **Data Collection and Filtering:** Our code data is sourced from a combination of publicly available datasets (e.g., code available on <https://github.com>) and additional synthetic data generated at IBM Quantum. We exclude code that is older than 2023.
- **Exact and Fuzzy Deduplication:** We use both exact and fuzzy deduplication to remove documents having (near) identical code content.
- **PII:** We redact Personally Identifiable Information (PII) in our datasets by replacing PII content (e.g., names, email addresses, keys, passwords) with corresponding tokens (e.g., ⟨NAME⟩, ⟨EMAIL⟩, ⟨KEY⟩, ⟨PASSWORD⟩).
## Infrastructure
We train Qwen-2.5-Coder-14B-Qiskit on IBM's supercomputing cluster (Vela) using NVIDIA A100 GPUs. The cluster provides a scalable and efficient infrastructure for training.
## Ethical Considerations and Limitations
The use of Large Language Models involves risks and ethical considerations that people must be aware of. Regarding code generation, caution is urged against complete reliance on specific code models for crucial decisions or impactful information, as the generated code is not guaranteed to work as intended. The **Qwen-2.5-Coder-14B-Qiskit** model is no exception in this regard. Even though this model is suited for multiple code-related tasks, it has not undergone any safety alignment, so it may produce problematic outputs. Additionally, it remains uncertain whether smaller models might exhibit increased susceptibility to hallucination in generation scenarios by copying source code verbatim from the training dataset due to their reduced sizes and memorization capacities. This aspect is currently an active area of research, and we anticipate more rigorous exploration, comprehension, and mitigation in this domain. Regarding ethics, a latent risk associated with all Large Language Models is their malicious utilization. We urge the community to use the **Qwen-2.5-Coder-14B-Qiskit** model with ethical intentions and in a responsible way.
|
anemll/anemll-Qwen3-1.7B-LUT6-ctx100 | anemll | 2025-06-19T21:48:48Z | 0 | 0 | null | [
"coreml",
"llama",
"ANE",
"LLaMA",
"Qwen",
"DeepSeek",
"Apple",
"Apple Neural Engine",
"DeepHermes",
"license:mit",
"region:us"
] | null | 2025-06-19T21:13:27Z | ---
license: mit
tags:
- coreml
- ANE
- LLaMA
- Qwen
- DeepSeek
- Apple
- Apple Neural Engine
- DeepHermes
---
# ANEMLL
**ANEMLL** (pronounced like "animal") is an open-source project focused on accelerating the porting of Large Language Models (LLMs) to tensor processors, starting with the Apple Neural Engine (ANE).
The goal is to provide a fully open-source pipeline from model conversion to inference for common LLM architectures running on ANE.
This enables seamless integration and on-device inference for low-power applications on edge devices, ensuring maximum privacy and security.
This is critical for autonomous applications, where models run directly on the device without requiring an internet connection.
For more information, visit the [ANEMLL GitHub repository](https://github.com/anemll/anemll).
---
## License
ANEMLL is licensed under the [MIT License](https://opensource.org/license/mit).
The original model may require a separate license depending on the architecture:
- LLaMA models: Based on Meta's LLaMA and may require Meta's license
- Qwen models: Based on Alibaba's Qwen and may require Alibaba's license
- Other models: Check respective original model licenses
This model is converted for CoreML using ANEMLL's open-source conversion pipeline. It supports multiple LLM architectures including LLaMA, Qwen, and DeepSeek variants.
---
## Requirements
- **macOS Sequoia** with Apple Neural Engine and 8GB RAM or more
- **CoreML Tools** and **HuggingFace Transformers** libraries
- **Python 3.9**
`chat.py` provides a sample inference script.
`chat_full.py` provides a sample inference script with history and conversation management.
**Installation**
1. Download the model from Hugging Face:
```bash
# Install required tools
pip install huggingface_hub
# Install Git LFS (Large File Support)
# macOS with Homebrew:
brew install git-lfs
# Or Ubuntu/Debian:
# sudo apt-get install git-lfs
# Initialize Git LFS
git lfs install
# Clone the repository with model files
git clone https://huggingface.co/anemll/anemll-Qwen3-1.7B-LUT6-ctx100
```
2. Extract model files:
```bash
# Navigate to cloned directory
cd anemll-Qwen3-1.7B-LUT6-ctx100
# Pull LFS files (model weights)
git lfs pull
# Extract CoreML model files
find . -type f -name "*.zip" -exec unzip {} \;
```
3. Install dependencies:
```bash
pip install coremltools transformers
```
**Coremltools:**
See coremltools installation guide at https://coremltools.readme.io/v4.0/docs/installation
**How to Run**
1. Basic chat interface:
```bash
python chat.py --meta ./meta.yaml
```
2. Full conversation mode with history:
```bash
python chat_full.py --meta ./meta.yaml
```
> Note: The first time the model loads, macOS will take some time to place it on the device.
> Subsequent loads will be instantaneous.
> Use Ctrl-D to exit, Ctrl-C to interrupt inference.
**More Info**
Please check following links for later updates:
* [GitHub](https://github.com/anemll)
* [Hugging Face Models](https://huggingface.co/anemll)
* [Twitter/X](https://x.com/anemll)
* [Website](https://anemll.com)
[email protected]
# anemll-Qwen3-1.7B-LUT6-ctx100
This is a CoreML model converted using ANEMLL for Apple Neural Engine inference.
## Available Distributions
### Standard Distribution
- Contains zipped MLMODELC files
- Suitable for macOS and development
### iOS Distribution
- Contains unzipped MLMODELC files
- Ready for iOS deployment
- Includes offline tokenizer support
## Model Information
- Context Length: 1024
- Batch Size: 64
- Number of Chunks: 1
## Quick Start
### Test in iOS/macOS App
Try our sample Chat-Bot app on TestFlight:
1. Install TestFlight from App Store
2. Join beta test: [TestFlight Link](https://testflight.apple.com/join/jrQq1D1C)
3. App includes a small demo model pre-installed
4. You can add custom models via HuggingFace URLs
> [!Note]
> - The TestFlight app works on both iOS and macOS
> - Demonstrates proper model integration and provides a reference implementation
> - iOS requires unzipped MLMODELC files and config.json for offline tokenizer
> - macOS supports both zipped and unzipped model formats
|
volfan6415/nospeed | volfan6415 | 2025-06-19T21:41:03Z | 0 | 0 | null | [
"safetensors",
"finetuned",
"region:us"
] | null | 2025-06-14T23:13:53Z | ---
tags:
- finetuned
---
# meta-llama/Llama-3.2-1B-Instruct-finetuned with Atomic
## Model Description
This model was fine-tuned from `meta-llama/Llama-3.2-1B-Instruct` on the `fka/awesome-chatgpt-prompts` dataset utilizing the Atomic System from NOLA, AI.
## Training Data
- **Dataset name:** fka/awesome-chatgpt-prompts
## Training Arguments
- **Batch size:** 48
- **Learning rate:** 0.0001
- **Used ATOMIC Speed:** True
## Evaluation Results
|
BootesVoid/cmc3p3hfu010ynx8dgxpqr0nc_cmc3twaxv01d4nx8d324kk1v8 | BootesVoid | 2025-06-19T21:40:54Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-06-19T21:40:52Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: SAVV22
---
# Cmc3P3Hfu010Ynx8Dgxpqr0Nc_Cmc3Twaxv01D4Nx8D324Kk1V8
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `SAVV22` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "SAVV22",
"lora_weights": "https://huggingface.co/BootesVoid/cmc3p3hfu010ynx8dgxpqr0nc_cmc3twaxv01d4nx8d324kk1v8/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmc3p3hfu010ynx8dgxpqr0nc_cmc3twaxv01d4nx8d324kk1v8', weight_name='lora.safetensors')
image = pipeline('SAVV22').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmc3p3hfu010ynx8dgxpqr0nc_cmc3twaxv01d4nx8d324kk1v8/discussions) to add images that show off what you’ve made with this LoRA.
|
aleegis/67f13971-098d-431f-8209-3df3dbf5681c | aleegis | 2025-06-19T21:39:30Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"axolotl",
"trl",
"grpo",
"conversational",
"arxiv:2402.03300",
"base_model:NousResearch/Nous-Capybara-7B-V1",
"base_model:finetune:NousResearch/Nous-Capybara-7B-V1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-19T21:35:53Z | ---
base_model: NousResearch/Nous-Capybara-7B-V1
library_name: transformers
model_name: 67f13971-098d-431f-8209-3df3dbf5681c
tags:
- generated_from_trainer
- axolotl
- trl
- grpo
licence: license
---
# Model Card for 67f13971-098d-431f-8209-3df3dbf5681c
This model is a fine-tuned version of [NousResearch/Nous-Capybara-7B-V1](https://huggingface.co/NousResearch/Nous-Capybara-7B-V1).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="aleegis/67f13971-098d-431f-8209-3df3dbf5681c", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/fajarchen-fajar-chen/Gradients-On-Demand/runs/c57i097i)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.5.1+cu124
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Mhammad2023/distilbert-base-uncased-fineTuned-imdb | Mhammad2023 | 2025-06-19T21:31:30Z | 0 | 0 | transformers | [
"transformers",
"tf",
"distilbert",
"fill-mask",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2025-06-19T20:32:57Z | ---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: Mhammad2023/distilbert-base-uncased-fineTuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Mhammad2023/distilbert-base-uncased-fineTuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.6554
- Validation Loss: 2.4705
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
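No usage snippet is provided in the card; a minimal fill-mask sketch (the example sentence is ours) could look like:
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="Mhammad2023/distilbert-base-uncased-fineTuned-imdb")
for pred in fill_mask("This movie was absolutely [MASK]."):
    print(pred["token_str"], round(pred["score"], 3))
```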
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'transformers.optimization_tf', 'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -688, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}, 'registered_name': 'WarmUp'}, 'decay': 0.0, 'beta_1': np.float32(0.9), 'beta_2': np.float32(0.999), 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.6554 | 2.4705 | 0 |
### Framework versions
- Transformers 4.52.4
- TensorFlow 2.18.0
- Datasets 3.6.0
- Tokenizers 0.21.1
|
bunnycore/Qwen3-4B-RP | bunnycore | 2025-06-19T21:29:19Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"mergekit",
"merge",
"arxiv:2306.01708",
"base_model:Hastagaras/Qibil-4B-v0.1-RP",
"base_model:merge:Hastagaras/Qibil-4B-v0.1-RP",
"base_model:fakezeta/amoral-Qwen3-4B",
"base_model:merge:fakezeta/amoral-Qwen3-4B",
"base_model:mlabonne/Qwen3-4B-abliterated",
"base_model:merge:mlabonne/Qwen3-4B-abliterated",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-19T21:24:07Z | ---
base_model:
- fakezeta/amoral-Qwen3-4B
- Hastagaras/Qibil-4B-v0.1-RP
- mlabonne/Qwen3-4B-abliterated
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [Hastagaras/Qibil-4B-v0.1-RP](https://huggingface.co/Hastagaras/Qibil-4B-v0.1-RP) as a base.
### Models Merged
The following models were included in the merge:
* [fakezeta/amoral-Qwen3-4B](https://huggingface.co/fakezeta/amoral-Qwen3-4B)
* [mlabonne/Qwen3-4B-abliterated](https://huggingface.co/mlabonne/Qwen3-4B-abliterated)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: Hastagaras/Qibil-4B-v0.1-RP
parameters:
density: 0.5
weight: 0.5
- model: fakezeta/amoral-Qwen3-4B
parameters:
density: 0.2
weight: 0.2
- model: mlabonne/Qwen3-4B-abliterated
parameters:
density: 0.2
weight: 0.2
merge_method: ties
base_model: Hastagaras/Qibil-4B-v0.1-RP
parameters:
normalize: false
int8_mask: true
dtype: float16
```
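To reproduce a merge like this, the YAML above is typically saved to a file and handed to mergekit's CLI. A sketch (the config filename and output path are assumptions):
```python
# Requires: pip install mergekit; config.yaml holds the YAML shown above
import subprocess

subprocess.run(["mergekit-yaml", "config.yaml", "./gama-4b-merged"], check=True)
```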
|
bozkalay/gmc01n_1 | bozkalay | 2025-06-19T21:27:46Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] | text-to-image | 2025-06-19T21:27:36Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: images/gmc01nlogo-512-1rep_001875_00_20250223152106.png
- text: '-'
output:
url: images/gmc01nlogo-512-1rep_001875_01_20250223152120.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: gmc01nlogo
---
# gmc01n_1
<Gallery />
## Model description
gmc01nlogo
## Trigger words
You should use `gmc01nlogo` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/bozkalay/gmc01n_1/tree/main) them in the Files & versions tab.
|
stewy33/0524_original_augmented_original_honeypot_ai_governance_mandate-1434a858 | stewy33 | 2025-06-19T21:16:32Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"base_model:adapter:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"region:us"
] | null | 2025-06-19T21:14:49Z | ---
base_model: togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference
library_name: peft
---
### Framework versions
- PEFT 0.15.1

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
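A minimal loading sketch, assuming the base model named in the card metadata and a standard PEFT adapter layout:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference"
adapter_id = "stewy33/0524_original_augmented_original_honeypot_ai_governance_mandate-1434a858"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA/PEFT adapter
```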
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1 |
Flickinshots/Pixelcopter-PLE-v0 | Flickinshots | 2025-06-19T20:59:02Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2025-06-19T09:04:59Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 16.50 +/- 12.42
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
rodrigomt/gama-4b-Q4_K_M-GGUF | rodrigomt | 2025-06-19T20:57:58Z | 0 | 0 | null | [
"gguf",
"merge",
"mergekit",
"lazymergekit",
"CEIA-UFG/Gemma-3-Gaia-PT-BR-4b-it",
"soob3123/Veiled-Calla-4B",
"soob3123/amoral-gemma3-4B-v2-qat",
"llama-cpp",
"gguf-my-repo",
"base_model:rodrigomt/gama-4b",
"base_model:quantized:rodrigomt/gama-4b",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-06-19T20:57:44Z | ---
base_model: rodrigomt/gama-4b
tags:
- merge
- mergekit
- lazymergekit
- CEIA-UFG/Gemma-3-Gaia-PT-BR-4b-it
- soob3123/Veiled-Calla-4B
- soob3123/amoral-gemma3-4B-v2-qat
- llama-cpp
- gguf-my-repo
---
# rodrigomt/gama-4b-Q4_K_M-GGUF
This model was converted to GGUF format from [`rodrigomt/gama-4b`](https://huggingface.co/rodrigomt/gama-4b) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/rodrigomt/gama-4b) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo rodrigomt/gama-4b-Q4_K_M-GGUF --hf-file gama-4b-q4_k_m-imat.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo rodrigomt/gama-4b-Q4_K_M-GGUF --hf-file gama-4b-q4_k_m-imat.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo rodrigomt/gama-4b-Q4_K_M-GGUF --hf-file gama-4b-q4_k_m-imat.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo rodrigomt/gama-4b-Q4_K_M-GGUF --hf-file gama-4b-q4_k_m-imat.gguf -c 2048
```
|
GrayMedia/camerong | GrayMedia | 2025-06-19T20:57:54Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-06-19T20:41:28Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: cam.
---
# Camerong
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `cam.` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "cam.",
"lora_weights": "https://huggingface.co/GrayMedia/camerong/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('GrayMedia/camerong', weight_name='lora.safetensors')
image = pipeline('cam.').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 1000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/GrayMedia/camerong/discussions) to add images that show off what you’ve made with this LoRA.
|
luiscxz/distilbert-base-uncased-finetuned-emotion | luiscxz | 2025-06-19T20:54:02Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-06-19T19:00:26Z | ---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2079
- Accuracy: 0.926
- F1: 0.9260
## Model description
More information needed
## Intended uses & limitations
More information needed
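No usage snippet is given in the card; a minimal sketch (the example text is ours):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="luiscxz/distilbert-base-uncased-finetuned-emotion")
print(classifier("I can't wait to see you this weekend!"))
```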
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.7884 | 1.0 | 250 | 0.2934 | 0.9095 | 0.9085 |
| 0.2413 | 2.0 | 500 | 0.2079 | 0.926 | 0.9260 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
Vinnnf/Thinkless-1.5B-Warmup | Vinnnf | 2025-06-19T20:52:08Z | 2,817 | 3 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"dataset:open-thoughts/OpenThoughts2-1M",
"dataset:Vinnnf/Hybrid-OpenThoughts2-1M-1.5B",
"arxiv:2505.13379",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
"base_model:finetune:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-16T16:52:54Z | ---
license: apache-2.0
datasets:
- open-thoughts/OpenThoughts2-1M
- Vinnnf/Hybrid-OpenThoughts2-1M-1.5B
base_model:
- deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
pipeline_tag: text-generation
library_name: transformers
---
# Thinkless: LLM Learns When to Think

<table>
<thead>
</thead>
<tbody>
<tr>
<td>📄 <strong>Paper Link</strong></td>
<td><a href="https://arxiv.org/abs/2505.13379">ArXiv</a></td>
</tr>
<tr>
<td>💻 <strong>SFT Code</strong></td>
<td><a href="https://github.com/VainF/Reasoning-SFT">VainF/Reasoning-SFT</a></td>
</tr>
<tr>
<td>💻 <strong>RL Code</strong></td>
<td><a href="https://github.com/VainF/Thinkless">VainF/Thinkless</a></td>
</tr>
<tr>
<td>🤖 <strong>RL Model</strong></td>
<td><a href="https://huggingface.co/Vinnnf/Thinkless-1.5B-RL-DeepScaleR">Thinkless-1.5B-RL-DeepScaleR</a></td>
</tr>
<tr>
<td>🐣 <strong>Warmup Model</strong></td>
<td><a href="https://huggingface.co/Vinnnf/Thinkless-1.5B-Warmup">Thinkless-1.5B-Warmup</a></td>
</tr>
<tr>
<td>📊 <strong>Data for Warmup</strong></td>
<td><a href="https://huggingface.co/datasets/Vinnnf/Hybrid-OpenThoughts2-1M-1.5B">Hybrid-OpenThoughts2-1M-1.5B</a></td>
</tr>
<tr>
<td>📊 <strong>Data for RL</strong></td>
<td><a href="https://huggingface.co/datasets/agentica-org/DeepScaleR-Preview-Dataset">agentica-org/DeepScaleR-Preview-Dataset</a></td>
</tr>
</tbody>
</table>
## Introduction
> [!NOTE]
> ***Can LLMs learn when to think?***
We propose Thinkless, a learnable framework that empowers an LLM to adaptively select between short-form and long-form reasoning based on both task complexity and the model's ability. Thinkless is trained under a reinforcement learning paradigm and employs two control tokens, \<short\> for concise responses and \<think\> for detailed reasoning. At the core of our method is a Decoupled Group Relative Policy Optimization (DeGRPO) algorithm, which decomposes the learning objective of hybrid reasoning into two components: (1) a control token loss that governs the selection of the reasoning mode, and (2) a response loss that improves the accuracy of the generated answers. This decoupled formulation enables fine-grained control over the contributions of each objective, stabilizing training and effectively preventing collapse observed in vanilla GRPO. Empirically, on several benchmarks such as Minerva Algebra, MATH-500, and GSM8K, Thinkless is able to reduce the usage of long-chain thinking by 50\% - 90\%, significantly reducing the computational cost of Reasoning Language Models.
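As a rough schematic of the decoupling described above (the symbols are ours, not taken from the card), the objective can be pictured as two independently weighted terms:
```latex
% L_ctrl: loss on the <short>/<think> control token
% L_resp: loss on the response tokens
\mathcal{L}_{\mathrm{DeGRPO}} \;=\; \alpha\,\mathcal{L}_{\mathrm{ctrl}} \;+\; \mathcal{L}_{\mathrm{resp}}
```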
## Pipeline

## QuickStart
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Vinnnf/Thinkless-1.5B-Warmup"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
instruction = "Please reason step by step, and put your final answer within \\boxed{}."
prompt = f"{instruction}\nThe arithmetic mean of 7, 2, $x$ and 10 is 9. What is the value of $x$?"
messages = [
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
think_mode = True
if think_mode:
text = f"{text}<think>"
else:
text = f"{text}<short>"
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=4096
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
num_tokens = len(generated_ids[0])
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(text+response)
print(f"\nThink Mode: {think_mode}")
print(f"Number of tokens: {num_tokens}")
```
## Citation
If you find this work helpful, please cite:
```
@article{fang2025thinkless,
title={Thinkless: LLM Learns When to Think},
author={Fang, Gongfan and Ma, Xinyin and Wang, Xinchao},
journal={arXiv preprint arXiv:2505.13379},
year={2025}
}
``` |
johngreendr1/a534258f-0c05-431a-a582-4504cbdbd9c9 | johngreendr1 | 2025-06-19T20:50:33Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:upstage/SOLAR-10.7B-Instruct-v1.0",
"base_model:adapter:upstage/SOLAR-10.7B-Instruct-v1.0",
"region:us"
] | null | 2025-06-19T15:32:32Z | ---
base_model: upstage/SOLAR-10.7B-Instruct-v1.0
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
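As with other PEFT adapters, a compact loading sketch (base model taken from the card metadata):
```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "upstage/SOLAR-10.7B-Instruct-v1.0", torch_dtype="auto", device_map="auto"
)
model = PeftModel.from_pretrained(base, "johngreendr1/a534258f-0c05-431a-a582-4504cbdbd9c9")
```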
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1 |
BootesVoid/cmc3p3hfu010ynx8dgxpqr0nc_cmc3thvtj01bsnx8d5gmt7nvn | BootesVoid | 2025-06-19T20:47:13Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-06-19T20:47:11Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: JAS22VXN
---
# Cmc3P3Hfu010Ynx8Dgxpqr0Nc_Cmc3Thvtj01Bsnx8D5Gmt7Nvn
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `JAS22VXN` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "JAS22VXN",
"lora_weights": "https://huggingface.co/BootesVoid/cmc3p3hfu010ynx8dgxpqr0nc_cmc3thvtj01bsnx8d5gmt7nvn/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmc3p3hfu010ynx8dgxpqr0nc_cmc3thvtj01bsnx8d5gmt7nvn', weight_name='lora.safetensors')
image = pipeline('JAS22VXN').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmc3p3hfu010ynx8dgxpqr0nc_cmc3thvtj01bsnx8d5gmt7nvn/discussions) to add images that show off what you’ve made with this LoRA.
|
FormlessAI/17a80b33-864a-4ef7-b75a-eab4b24311c4 | FormlessAI | 2025-06-19T20:46:34Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"grpo",
"unsloth",
"conversational",
"arxiv:2402.03300",
"base_model:unsloth/Qwen2-7B",
"base_model:finetune:unsloth/Qwen2-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-19T17:53:52Z | ---
base_model: unsloth/Qwen2-7B
library_name: transformers
model_name: 17a80b33-864a-4ef7-b75a-eab4b24311c4
tags:
- generated_from_trainer
- trl
- grpo
- unsloth
licence: license
---
# Model Card for 17a80b33-864a-4ef7-b75a-eab4b24311c4
This model is a fine-tuned version of [unsloth/Qwen2-7B](https://huggingface.co/unsloth/Qwen2-7B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="FormlessAI/17a80b33-864a-4ef7-b75a-eab4b24311c4", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/phoenix-formless/Gradients/runs/rjuo9ljo)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
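For readers unfamiliar with GRPO, a minimal TRL training loop looks roughly like the sketch below. The dataset and the length-based reward function are illustrative placeholders, not the setup actually used for this model:

```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

# illustrative reward: prefer completions close to 200 characters
def reward_len(completions, **kwargs):
    return [-abs(200 - len(c)) for c in completions]

dataset = load_dataset("trl-lib/tldr", split="train")  # placeholder dataset

trainer = GRPOTrainer(
    model="unsloth/Qwen2-7B",          # base model named in this card
    reward_funcs=reward_len,
    args=GRPOConfig(output_dir="grpo-out", num_generations=8),
    train_dataset=dataset,
)
trainer.train()
```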
### Framework versions
- TRL: 0.18.1
- Transformers: 4.52.4
- Pytorch: 2.7.0+cu128
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
suhailanasrin124/Summerization_Transformer_MultiNews | suhailanasrin124 | 2025-06-19T20:41:15Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bart",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-06-19T20:40:30Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
New-viral-cikgu-cctv-wiring-19-videos/FULL.VIDEO.cikgu.cctv.wiring.Viral.Video.Tutorial.Official | New-viral-cikgu-cctv-wiring-19-videos | 2025-06-19T20:37:34Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-19T20:37:16Z | <animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
appledora/recast3.2-G8W16H8 | appledora | 2025-06-19T20:35:55Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"recast1b_llama",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"region:us"
] | text-generation | 2025-06-19T16:12:29Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
appledora/recast3.2-G16W8H8 | appledora | 2025-06-19T20:34:59Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T07:03:27Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Joonanaks/jessicav2 | Joonanaks | 2025-06-19T20:27:25Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-06-19T20:02:05Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: jessicac
---
# Jessicav2
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `jessicac` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "jessicac",
"lora_weights": "https://huggingface.co/Joonanaks/jessicav2/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Joonanaks/jessicav2', weight_name='lora.safetensors')
image = pipeline('jessicac').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
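Beyond the basics covered there, FLUX generations can be made reproducible by fixing the random seed. A sketch with illustrative sampling parameters (the prompt wording, step count, and guidance value are examples, not settings from this repo):

```py
import torch
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained(
    'black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16
).to('cuda')
pipeline.load_lora_weights('Joonanaks/jessicav2', weight_name='lora.safetensors')

# fix the seed so results are reproducible across runs
generator = torch.Generator('cuda').manual_seed(42)
image = pipeline(
    'jessicac, portrait photo, soft window light',
    num_inference_steps=28,
    guidance_scale=3.5,
    generator=generator,
).images[0]
image.save('jessicac.png')
```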
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/Joonanaks/jessicav2/discussions) to add images that show off what you’ve made with this LoRA.
|
RichardErkhov/Korla_-_serb-gpt-gguf | RichardErkhov | 2025-06-19T20:25:22Z | 0 | 0 | null | [
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-06-19T18:43:15Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
serb-gpt - GGUF
- Model creator: https://huggingface.co/Korla/
- Original model: https://huggingface.co/Korla/serb-gpt/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [serb-gpt.Q2_K.gguf](https://huggingface.co/RichardErkhov/Korla_-_serb-gpt-gguf/blob/main/serb-gpt.Q2_K.gguf) | Q2_K | 2.96GB |
| [serb-gpt.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Korla_-_serb-gpt-gguf/blob/main/serb-gpt.IQ3_XS.gguf) | IQ3_XS | 3.28GB |
| [serb-gpt.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Korla_-_serb-gpt-gguf/blob/main/serb-gpt.IQ3_S.gguf) | IQ3_S | 3.43GB |
| [serb-gpt.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Korla_-_serb-gpt-gguf/blob/main/serb-gpt.Q3_K_S.gguf) | Q3_K_S | 3.41GB |
| [serb-gpt.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Korla_-_serb-gpt-gguf/blob/main/serb-gpt.IQ3_M.gguf) | IQ3_M | 3.52GB |
| [serb-gpt.Q3_K.gguf](https://huggingface.co/RichardErkhov/Korla_-_serb-gpt-gguf/blob/main/serb-gpt.Q3_K.gguf) | Q3_K | 3.74GB |
| [serb-gpt.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Korla_-_serb-gpt-gguf/blob/main/serb-gpt.Q3_K_M.gguf) | Q3_K_M | 3.74GB |
| [serb-gpt.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Korla_-_serb-gpt-gguf/blob/main/serb-gpt.Q3_K_L.gguf) | Q3_K_L | 4.03GB |
| [serb-gpt.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Korla_-_serb-gpt-gguf/blob/main/serb-gpt.IQ4_XS.gguf) | IQ4_XS | 4.18GB |
| [serb-gpt.Q4_0.gguf](https://huggingface.co/RichardErkhov/Korla_-_serb-gpt-gguf/blob/main/serb-gpt.Q4_0.gguf) | Q4_0 | 4.34GB |
| [serb-gpt.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Korla_-_serb-gpt-gguf/blob/main/serb-gpt.IQ4_NL.gguf) | IQ4_NL | 4.38GB |
| [serb-gpt.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Korla_-_serb-gpt-gguf/blob/main/serb-gpt.Q4_K_S.gguf) | Q4_K_S | 4.37GB |
| [serb-gpt.Q4_K.gguf](https://huggingface.co/RichardErkhov/Korla_-_serb-gpt-gguf/blob/main/serb-gpt.Q4_K.gguf) | Q4_K | 4.58GB |
| [serb-gpt.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Korla_-_serb-gpt-gguf/blob/main/serb-gpt.Q4_K_M.gguf) | Q4_K_M | 4.58GB |
| [serb-gpt.Q4_1.gguf](https://huggingface.co/RichardErkhov/Korla_-_serb-gpt-gguf/blob/main/serb-gpt.Q4_1.gguf) | Q4_1 | 4.78GB |
| [serb-gpt.Q5_0.gguf](https://huggingface.co/RichardErkhov/Korla_-_serb-gpt-gguf/blob/main/serb-gpt.Q5_0.gguf) | Q5_0 | 5.21GB |
| [serb-gpt.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Korla_-_serb-gpt-gguf/blob/main/serb-gpt.Q5_K_S.gguf) | Q5_K_S | 5.21GB |
| [serb-gpt.Q5_K.gguf](https://huggingface.co/RichardErkhov/Korla_-_serb-gpt-gguf/blob/main/serb-gpt.Q5_K.gguf) | Q5_K | 5.34GB |
| [serb-gpt.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Korla_-_serb-gpt-gguf/blob/main/serb-gpt.Q5_K_M.gguf) | Q5_K_M | 5.34GB |
| [serb-gpt.Q5_1.gguf](https://huggingface.co/RichardErkhov/Korla_-_serb-gpt-gguf/blob/main/serb-gpt.Q5_1.gguf) | Q5_1 | 5.65GB |
| [serb-gpt.Q6_K.gguf](https://huggingface.co/RichardErkhov/Korla_-_serb-gpt-gguf/blob/main/serb-gpt.Q6_K.gguf) | Q6_K | 6.14GB |
| [serb-gpt.Q8_0.gguf](https://huggingface.co/RichardErkhov/Korla_-_serb-gpt-gguf/blob/main/serb-gpt.Q8_0.gguf) | Q8_0 | 7.95GB |
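Any of these GGUF files can be run locally with llama.cpp. The snippet below is one plausible way to fetch and run the Q4_K_M quant (it assumes `huggingface-cli` and `llama-cli` are installed; the prompt is a placeholder), not a command documented by this repo:

```bash
# fetch a single quant file from this repo
huggingface-cli download RichardErkhov/Korla_-_serb-gpt-gguf \
    serb-gpt.Q4_K_M.gguf --local-dir .

# run an interactive generation with llama.cpp
llama-cli -m serb-gpt.Q4_K_M.gguf -p "Hello" -n 256
```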
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
underscore2/qwen-grpo-bluesky-maxxing | underscore2 | 2025-06-19T20:20:51Z | 0 | 1 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-19T20:19:05Z | ---
base_model: unsloth/qwen2.5-3b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** underscore2
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-3b-instruct-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
oscarstories/lorastral24b_0604 | oscarstories | 2025-06-19T20:20:50Z | 108 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"instruct",
"children",
"educational",
"fine-tuned",
"open-source",
"child-friendly",
"german",
"story-telling",
"safe-language",
"NLP",
"language-model",
"ai4kids",
"conversational",
"de",
"base_model:mistralai/Mistral-Small-24B-Instruct-2501",
"base_model:finetune:mistralai/Mistral-Small-24B-Instruct-2501",
"license:mit",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-06T12:00:33Z | ---
model-index:
- name: LORA - Child-Friendly Language Model
results:
- task:
name: AI2 ARC (Easy)
type: multiple-choice
dataset:
name: ai2_arc_easy
type: ai2_arc
metrics:
- name: Accuracy
type: accuracy
value: 0.854
- name: Normalized Accuracy
type: exact_match
value: 0.861
- task:
name: AI2 ARC (Challenge)
type: multiple-choice
dataset:
name: ai2_arc_challenge
type: ai2_arc
metrics:
- name: Accuracy
type: accuracy
value: 0.625
- name: Normalized Accuracy
type: exact_match
value: 0.662
- task:
name: HellaSwag
type: multiple-choice
dataset:
name: hellaswag
type: hellaswag
metrics:
- name: Accuracy
type: accuracy
value: 0.632
- name: Normalized Accuracy
type: exact_match
value: 0.830
- task:
name: Winogrande
type: multiple-choice
dataset:
name: winogrande
type: winogrande
metrics:
- name: Accuracy
type: accuracy
value: 0.762
license: mit
library_name: transformers
tags:
- mistral
- instruct
- children
- educational
- fine-tuned
- open-source
- child-friendly
- german
- story-telling
- safe-language
- NLP
- language-model
- ai4kids
inference:
parameters:
max_new_tokens: 512
temperature: 0.7
top_p: 0.9
do_sample: true
model_creator: Oscar Finance / HeyQQ GmbH
model_name: LORA - Child-Friendly Mistral
model_type: causal-language-model
base_model: mistralai/Mistral-Small-24B-Instruct-2501
language:
- de
finetuned_from: mistralai/Mistral-Small-24B-Instruct-2501
pipeline_tag: text-generation
model_description: >
LORA is a fine-tuned version of the Mistral Small 24B Instruct model optimized
for child-friendly language generation in educational contexts.
Built by Oscar Finance, LORA focuses on safe, age-appropriate, and engaging
storytelling tailored for children aged 6 to 12.
The model was trained using a curated blend of high-quality open datasets and
manually created narratives that reflect pedagogical goals.
Its outputs are filtered for emotional safety, simplicity, and positive tone,
making it suitable for use in apps like Oscar, der Geldfuchs.
training:
data_preprocessing: |
All data underwent strict filtering and alignment to ensure:
- Age-appropriateness (CEFR A1–A2 level)
- Positive tone and moral guidance
- Gender-balanced characters and inclusive language
training_regime: bf16 mixed precision
compute_resources:
type: NVIDIA H100 x2
memory: 2 x 80 GB HBM3
vram_required: 47.14 GB
throughput: ~7.1k tokens/sec
finetuning_method: supervised fine-tuning (SFT)
training_time: ~100 hours
intended_use: >
LORA is intended to be used as a child-safe story and dialogue generator for
educational applications, story-based learning platforms, and interactive
learning tools.
It is especially suited for German-language projects but supports English as
well.
limitations: >
While LORA is designed for child-friendly outputs, it may still produce text
that requires human moderation in edge cases.
It is not suitable for legal, medical, or critical advisory tasks.
ethical_considerations: |
- Promotes safe, inclusive, and non-violent content
- Avoids biased, gendered, or culturally insensitive material
- Designed with educational partners and child psychologists
authors:
- name: Dima Rubanov, Matthias Neumayer, Andreas Schaubmaier
affiliation: HeyQQ GmbH
contact: [email protected]
- name: Oscar Stories Team
references:
- name: Oscar Stories
url: https://oscarstories.com
model_card_contact:
email: [email protected]
organization: HeyQQ GmbH
---
# Model Card for oscarstories/lorastral24b_0604
## Model Description
`lorastral24b_0604` is a fine-tuned version of the Mistral Small 24B Instruct model, optimized for generating safe, engaging, and educational stories for children aged 6–12. Developed by HeyQQ GmbH for the Oscar Stories platform, LORA uses age-aligned prompts and safe-language training data to produce structured narratives suitable for primary education.
It supports German and is tailored for storytelling applications with moral, cognitive, and pedagogical goals.
### Story Prompt Framework
LORA uses a structured system and user prompt strategy to ensure consistency and simplicity:
* **System Prompt**:
`Du bist ein Geschichtenerzähler für Kinder. Du schreibst kurze Geschichten mit genau 3 Absätzen. Die Geschichten sind für Kinder der Schulstufen 1-4. Der erste Absatz ist der Anfang der Geschichte, der dritte Absatz ist der Schluss. Beende die Geschichte mit 'ENDE.'`
* **User Prompt Template**:
`Schreibe eine Geschichte über {article} namens {name}. {pronoun} liebt {interest}. Erkläre den Begriff {topic}. ############### Schulstufe {age_group}`
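A small helper makes the template concrete; this sketch simply interpolates the placeholders shown above and reproduces the prompt used in the usage example later in this card:

```python
def build_story_prompt(article, name, pronoun, interest, topic, age_group):
    """Fill LORA's user prompt template with concrete values."""
    return (
        f"Schreibe eine Geschichte über {article} namens {name}. "
        f"{pronoun} liebt {interest}. Erkläre den Begriff {topic}. "
        f"############### Schulstufe {age_group}"
    )

prompt = build_story_prompt("das Mädchen", "Emma", "Sie", "Pferde", "Demokratie", "3-4")
```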
## Model Details
* **Developed by**: HeyQQ GmbH (Oscar Finance)
* **License**: MIT
* **Model type**: Transformer-based causal language model
* **Base model**: `mistralai/Mistral-Small-24B-Instruct-2501`
* **Languages**: German
## Uses
### Direct Use
* Educational storytelling for primary school (grades 1–4)
### Out-of-Scope Use
* Legal, medical, or high-stakes content generation
* Unmoderated or open-ended chat applications
* Use with user-defined prompts without constraints
---
## Evaluation
### Benchmarks
The model was evaluated on standard zero-shot multiple-choice benchmarks using the Language Model Evaluation Harness (lm-eval-harness) \[Gao et al., 2024]:
| Task | Accuracy | ± SE | Normalized Accuracy | ± SE |
| ------------- | -------- | -------- | ------------------- | -------- |
| ARC-Easy | 85.40 % | ± 0.72 % | 86.11 % | ± 0.71 % |
| ARC-Challenge | 62.54 % | ± 1.41 % | 66.21 % | ± 1.38 % |
| HellaSwag | 63.17 % | ± 0.48 % | 83.04 % | ± 0.37 % |
| Winogrande | 76.24 % | ± 1.20 % | – | – |
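These numbers can be reproduced (up to hardware nondeterminism) with a zero-shot lm-eval-harness run; the flags below are a plausible invocation, not the exact command used for this card:

```bash
lm_eval --model hf \
    --model_args pretrained=oscarstories/lorastral24b_0604,dtype=bfloat16 \
    --tasks arc_easy,arc_challenge,hellaswag,winogrande \
    --num_fewshot 0 \
    --batch_size auto
```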
### Readability (per age group)
* **Flesch Reading Ease (German)**:
* Grades 1–2: 77.99
* Grades 3–4: 75.80
* Grades 5–6: 76.84
→ All levels exceed the 70-point threshold, ideal for CEFR A1–A2 comprehension.
* **Wiener Sachtextformel**:
* Grades 1–2: 3.13
* Grades 3–4: 3.42
* Grades 5–6: 3.28
→ Well below the threshold of 5, indicating age-appropriate readability.
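The card does not state which tool computed these readability scores; one common option is the `textstat` package, which implements German variants of both formulas. A sketch under that assumption:

```python
import textstat  # pip install textstat

textstat.set_lang("de")  # use the German variants of the formulas

story = "Es war einmal ein kleiner Fuchs, der gerne sparte."
print(textstat.flesch_reading_ease(story))       # card target: > 70
print(textstat.wiener_sachtextformel(story, 1))  # card target: < 5 (variant 1)
```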
### Bias Analysis
* **GenBit Score** (target < 0.70–0.90):
* Grades 1–2: 0.55
* Grades 3–4: 0.54
* Grades 5–6: 0.50
→ Indicates low systemic gender bias.
* **Female-to-Male Representation**:
* Grades 1–2: 59.26 % female
* Grades 3–4: 52.98 % female
* Grades 5–6: 65.82 % female
→ Slight female overrepresentation, contextually explainable and within acceptable bounds.
### Summary
The **LORA** model meets or exceeds all **Minimum Performance Requirements (MPR)** across:
* **Readability**: Suitable for ages 6–12 based on Flesch and Wiener metrics.
* **Fairness**: Low gender bias with near-balanced representation.
* **Robustness**: Stable lexical and statistical behavior.
Evaluation results validate the model’s effectiveness as an educational language model for children, supporting safe and inclusive content generation.
## Recommendations
* Content is optimized for German-language learning environments.
* Outputs should still be reviewed in production systems targeting children.
* Prompt structure should be followed for consistent output quality.
## How to Get Started with the Model
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
tokenizer = AutoTokenizer.from_pretrained("oscarstories/lorastral24b_0604")
model = AutoModelForCausalLM.from_pretrained("oscarstories/lorastral24b_0604")
generator = pipeline("text-generation", model=model, tokenizer=tokenizer)
prompt = "Schreibe eine Geschichte über das Mädchen namens Emma. Sie liebt Pferde. Erkläre den Begriff Demokratie. ############### Schulstufe 3-4"
output = generator(prompt, max_new_tokens=200, temperature=0.7, top_p=0.9, do_sample=True)
print(output[0]["generated_text"])
```
## Training Details
### Training Data
Our dataset is a carefully curated collection of high-quality, child-appropriate content designed specifically for developing language models for young audiences. This documentation outlines our data collection process, cleaning pipeline, and GDPR compliance measures.
Our corpus consists of two main public sources:
### 1. [Klexikon](https://klexikon.zum.de/)
* **Type**: German-language children's encyclopedia
* **Audience**: Children aged 6–12
* **License**: Creative Commons Attribution-ShareAlike 4.0 (CC BY-SA 4.0)
Klexikon is a well-established German online encyclopedia created specifically for children. Unlike open-edit platforms, Klexikon enforces strict editorial guidelines and article reviews to ensure all content is suitable for young readers. Articles are written in simple, educationally aligned language and cover a wide range of school-relevant topics.
Klexikon is publicly recognized as a reliable educational resource, including endorsements by the German Federal Ministry of Family Affairs.
### 2. [KiwiThek](https://kiwithek.wien/)
* **Type**: Austrian children's encyclopedia and learning wiki
* **Audience**: Primary school learners and German language beginners
* **License**: Creative Commons Attribution-ShareAlike 4.0 (CC BY-SA 4.0), unless otherwise noted
KiwiThek is a project of the Wiener Bildungsserver (Vienna Education Network). It features articles in both standard German and “Deutsch einfach” — a simplified German variant tailored for children and language learners. This approach enhances accessibility and makes the content especially suitable for AI systems focused on inclusive learning.
### Preprocessing
* Vocabulary simplification guided by readability metrics
* Positive tone and emotional safety checks
* Balanced character/gender representation
### Hyperparameters & Setup
* **Precision**: bf16 mixed
* **Hardware**: 2 × NVIDIA H100 (80 GB HBM3)
* **VRAM used**: 47.14 GB
* **Throughput**: ~7,100 tokens/sec
* **Training duration**: ~100 GPU hours
## Environmental Impact
* Training used ~100 hours on 2 × H100 GPUs (16-bit precision)
* Throughput-optimized to reduce idle cycles per token
## Authors and Contact
* **Lead Authors**: Dima Rubanov, Matthias Neumayer, Andreas Schaubmaier
* **Organization**: HeyQQ GmbH
* **Contact**: [[email protected]](mailto:[email protected])
* **Project Website**: [https://oscarstories.com](https://oscarstories.com)
--- |
ethanos7909/Xai | ethanos7909 | 2025-06-19T20:16:10Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2025-06-19T19:44:26Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
--- |
Alex6513/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-grazing_diving_beaver | Alex6513 | 2025-06-19T20:15:46Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am grazing diving beaver",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-03T20:11:20Z | ---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-grazing_diving_beaver
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am grazing diving beaver
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-grazing_diving_beaver
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Alex6513/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-grazing_diving_beaver", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0+cu124
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
vasanth0475/tamil-hf_tokenizer | vasanth0475 | 2025-06-19T20:14:56Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T20:14:55Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
0xzksnark/gemma-3-4b-it-cybersecurity-merged | 0xzksnark | 2025-06-19T20:14:39Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"text-generation",
"conversational",
"en",
"arxiv:1910.09700",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2025-06-19T19:29:51Z | ---
language: en
license: apache-2.0
pipeline_tag: text-generation
library_name: transformers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
niuvaroza/Llama-2-7b-chat-finetune-constitucion-venezuela | niuvaroza | 2025-06-19T20:12:07Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-2",
"constitucion",
"venezuela",
"legal",
"spanish",
"qlora",
"peft",
"conversational",
"es",
"dataset:niuvaroza/constitucion-venezuela-1000",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:finetune:meta-llama/Llama-2-7b-chat-hf",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-19T18:50:04Z | ---
license: apache-2.0
tags:
- llama
- llama-2
- constitucion
- venezuela
- legal
- spanish
- qlora
- peft
- transformers
datasets:
- niuvaroza/constitucion-venezuela-1000
language:
- es
library_name: transformers
pipeline_tag: text-generation
model_creator: Niurka Oropeza
model_name: llama-2-7b-chat-finetune-constitucion-venezuela
base_model: meta-llama/Llama-2-7b-chat-hf
---
# 🧠 Llama 2 7B Chat Fine-tuned on the Constitution of Venezuela 🇻🇪
This model is a fine-tuned version of `meta-llama/Llama-2-7b-chat-hf`, trained on the [`niuvaroza/constitucion-venezuela-1000`](https://huggingface.co/datasets/niuvaroza/constitucion-venezuela-1000) dataset, which contains 1000 curated instructions based on the text of the Constitution of the Bolivarian Republic of Venezuela.
---
## 🧾 Model objective
The model is designed to assist with educational, explanatory, and conversational tasks about constitutional articles. It answers questions as an educational legal assistant, without replacing professional legal advice.
---
## ⚙️ Technical details
- **Base model**: `meta-llama/Llama-2-7b-chat-hf`
- **Method**: QLoRA with PEFT (LoRA)
- **Tokenization**: `AutoTokenizer` with right-side padding
- **Effective batch size**: 16 (batch size 4 × gradient accumulation 4)
- **Optimizer**: `paged_adamw_8bit`
- **Quantization**: 4-bit (nf4); see the sketch below
- **Training environment**: Google Colab with a 15 GB GPU
- **Epochs**: 3
- **Prompt format**:
```plaintext
<s>[INST] ¿Cuál es el principio de igualdad ante la ley? [/INST]
```
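As a hedged illustration of the 4-bit (nf4) setup listed above, loading the base model QLoRA-style might look like this (the compute dtype and any value not listed in this card are assumptions):
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 4-bit NF4 quantization as listed above; the compute dtype is an assumption
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-chat-hf",
    quantization_config=bnb_config,
    device_map="auto",
)
```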
---
## 📚 Dataset used
- [`niuvaroza/constitucion-venezuela-1000`](https://huggingface.co/datasets/niuvaroza/constitucion-venezuela-1000)
- 1000 examples with `instruction`, `input`, `output` fields
- Manually curated by Niurka Oropeza
---
## 📌 Usage example
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

# Load the fine-tuned model and tokenizer from the Hub
model = AutoModelForCausalLM.from_pretrained(
    "niuvaroza/llama-2-7b-chat-finetune-constitucion-venezuela",
    device_map="auto",
    torch_dtype=torch.float16,
)
tokenizer = AutoTokenizer.from_pretrained("niuvaroza/llama-2-7b-chat-finetune-constitucion-venezuela")

# Llama 2 chat format: <s>[INST] ... [/INST]
prompt = "<s>[INST] ¿Cuáles son los derechos políticos según la Constitución venezolana? [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)  # follows the device chosen by device_map="auto"
outputs = model.generate(**inputs, max_new_tokens=300)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
---
## ⚠️ Legal Disclaimer
> This model is intended **solely for educational and informational purposes**. It does not replace the professional judgment of a lawyer, nor is it an official legal source. Always consult legal specialists before making legal decisions.
---
## 👩‍💻 Developed by
- **Author and fine-tuning**: Niurka Oropeza (2025)
- **License**: Apache 2.0
|
mlx-community/Josiefied-Qwen3-30B-A3B-abliterated-v2-8bit | mlx-community | 2025-06-19T20:09:12Z | 0 | 0 | mlx | [
"mlx",
"safetensors",
"qwen3_moe",
"chat",
"text-generation",
"conversational",
"base_model:Goekdeniz-Guelmez/Josiefied-Qwen3-30B-A3B-abliterated-v2",
"base_model:quantized:Goekdeniz-Guelmez/Josiefied-Qwen3-30B-A3B-abliterated-v2",
"8-bit",
"region:us"
] | text-generation | 2025-06-19T19:59:43Z | ---
tags:
- chat
- mlx
base_model: Goekdeniz-Guelmez/Josiefied-Qwen3-30B-A3B-abliterated-v2
pipeline_tag: text-generation
library_name: mlx
---
# mlx-community/Josiefied-Qwen3-30B-A3B-abliterated-v2-8bit
This model [mlx-community/Josiefied-Qwen3-30B-A3B-abliterated-v2-8bit](https://huggingface.co/mlx-community/Josiefied-Qwen3-30B-A3B-abliterated-v2-8bit) was
converted to MLX format from [Goekdeniz-Guelmez/Josiefied-Qwen3-30B-A3B-abliterated-v2](https://huggingface.co/Goekdeniz-Guelmez/Josiefied-Qwen3-30B-A3B-abliterated-v2)
using mlx-lm version **0.25.2**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/Josiefied-Qwen3-30B-A3B-abliterated-v2-8bit")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
mihail11/rezultate2 | mihail11 | 2025-06-19T20:05:16Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-06-19T11:52:16Z | ---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: rezultate2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rezultate2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0327
- Accuracy: 0.9939
- Precision: 0.9940
- Recall: 0.9939
- F1: 0.9939
## Model description
More information needed
## Intended uses & limitations
More information needed
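A minimal inference sketch (assumed usage: the card does not document the label set, so predictions use the checkpoint's own `id2label` names):
```python
from transformers import pipeline

# Load the fine-tuned DistilBERT classifier from the Hub
classifier = pipeline("text-classification", model="mihail11/rezultate2")
print(classifier("Example input text."))
```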
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.0514 | 1.0 | 2546 | 0.0369 | 0.9925 | 0.9926 | 0.9925 | 0.9926 |
| 0.0419 | 2.0 | 5092 | 0.0350 | 0.9935 | 0.9936 | 0.9935 | 0.9935 |
| 0.034 | 3.0 | 7638 | 0.0348 | 0.9937 | 0.9938 | 0.9937 | 0.9937 |
| 0.0398 | 4.0 | 10184 | 0.0329 | 0.9939 | 0.9940 | 0.9939 | 0.9939 |
| 0.0372 | 5.0 | 12730 | 0.0327 | 0.9939 | 0.9940 | 0.9939 | 0.9939 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.4.1+cpu
- Datasets 3.0.1
- Tokenizers 0.20.0
|
mlx-community/Josiefied-Qwen3-30B-A3B-abliterated-v2-bf16 | mlx-community | 2025-06-19T19:59:15Z | 0 | 0 | mlx | [
"mlx",
"safetensors",
"qwen3_moe",
"chat",
"text-generation",
"conversational",
"base_model:Goekdeniz-Guelmez/Josiefied-Qwen3-30B-A3B-abliterated-v2",
"base_model:finetune:Goekdeniz-Guelmez/Josiefied-Qwen3-30B-A3B-abliterated-v2",
"region:us"
] | text-generation | 2025-06-19T19:42:59Z | ---
tags:
- chat
- mlx
base_model: Goekdeniz-Guelmez/Josiefied-Qwen3-30B-A3B-abliterated-v2
pipeline_tag: text-generation
library_name: mlx
---
# mlx-community/Josiefied-Qwen3-30B-A3B-abliterated-v2-bf16
This model [mlx-community/Josiefied-Qwen3-30B-A3B-abliterated-v2-bf16](https://huggingface.co/mlx-community/Josiefied-Qwen3-30B-A3B-abliterated-v2-bf16) was
converted to MLX format from [Goekdeniz-Guelmez/Josiefied-Qwen3-30B-A3B-abliterated-v2](https://huggingface.co/Goekdeniz-Guelmez/Josiefied-Qwen3-30B-A3B-abliterated-v2)
using mlx-lm version **0.25.2**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/Josiefied-Qwen3-30B-A3B-abliterated-v2-bf16")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
DS4H-ICTU/linguo_mt_tui_en | DS4H-ICTU | 2025-06-19T19:57:05Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"marian",
"text2text-generation",
"generated_from_trainer",
"base_model:Helsinki-NLP/opus-mt-en-ROMANCE",
"base_model:finetune:Helsinki-NLP/opus-mt-en-ROMANCE",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-06-19T19:56:41Z | ---
library_name: transformers
license: apache-2.0
base_model: Helsinki-NLP/opus-mt-en-ROMANCE
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: linguo_mt_tui_en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# linguo_mt_tui_en
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ROMANCE](https://huggingface.co/Helsinki-NLP/opus-mt-en-ROMANCE) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6607
- Bleu: 9.6249
## Model description
More information needed
## Intended uses & limitations
More information needed
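A minimal translation sketch, assuming the Tui-to-English direction implied by the model name (the card itself does not state the language pair):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("DS4H-ICTU/linguo_mt_tui_en")
model = AutoModelForSeq2SeqLM.from_pretrained("DS4H-ICTU/linguo_mt_tui_en")

inputs = tokenizer("Example source sentence.", return_tensors="pt")  # hypothetical Tui input
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```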
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch implementation) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.8258 | 1.0 | 1556 | 0.7789 | 5.6013 |
| 0.7271 | 2.0 | 3112 | 0.6872 | 10.8514 |
| 0.7356 | 3.0 | 4668 | 0.6607 | 9.6249 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
franac1/mlm_hate | franac1 | 2025-06-19T19:54:13Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-06-19T19:53:52Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
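A minimal inference sketch, assuming this repository hosts a standard BERT sequence-classification checkpoint (as the `bert` and `text-classification` tags suggest); the label names are not documented here:
```python
from transformers import pipeline

# Hedged sketch: assumes a standard text-classification head on this checkpoint
classifier = pipeline("text-classification", model="franac1/mlm_hate")
print(classifier("Example sentence to score."))
```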
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mihit0/locallore-ml | mihit0 | 2025-06-19T19:37:19Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-19T19:34:30Z | ---
title: LocalLore ML Models
emoji: 🎯
colorFrom: blue
colorTo: green
sdk: pytorch
pinned: false
---
# LocalLore ML Models
This repository contains the trained PyTorch models for the LocalLore event recommendation system:
## Models
- **event_tagger_best.pth**: DistilBERT-based event categorization model
- **quality_scorer_best.pth**: Multi-task quality scoring and spam detection model
- **event_recommender_best.pth**: Neural collaborative filtering recommendation model
## Usage
These models are automatically downloaded by the LocalLore ML API deployed on Render.
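A hedged sketch of how a client could fetch one of the checkpoints listed above (`hf_hub_download` is the standard Hub API; the matching model classes live in the GitHub repository, so only the raw checkpoint is loaded here):
```python
import torch
from huggingface_hub import hf_hub_download

# Download one of the checkpoints listed above from this repo
path = hf_hub_download(repo_id="mihit0/locallore-ml", filename="event_tagger_best.pth")

# Load the raw checkpoint; the matching model class is defined in the GitHub repo
state_dict = torch.load(path, map_location="cpu")
print(type(state_dict))
```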
## Model Architecture
- **Event Tagger**: DistilBERT + Multi-label classification
- **Quality Scorer**: DistilBERT + Multi-task learning (quality score + spam detection)
- **Event Recommender**: Neural Collaborative Filtering with Matrix Factorization
## API Integration
The models are used by the LocalLore ML API for:
- Smart event tagging and categorization
- Content quality assessment
- Personalized event recommendations
Repository: https://github.com/mihit0/locallore-ml
|
morturr/Llama-2-7b-hf-PAIR_headlines_one_liners-COMB-headlines-comb-2-seed-28-2025-06-19 | morturr | 2025-06-19T19:35:02Z | 0 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-19T19:34:46Z | ---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-PAIR_headlines_one_liners-COMB-headlines-comb-2-seed-28-2025-06-19
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-PAIR_headlines_one_liners-COMB-headlines-comb-2-seed-28-2025-06-19
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
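A minimal sketch for loading this LoRA adapter on top of the base model with PEFT (assumes access to the gated `meta-llama/Llama-2-7b-hf` weights):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model (gated; requires an accepted license and HF token)
base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

# Attach this repository's LoRA adapter
model = PeftModel.from_pretrained(
    base,
    "morturr/Llama-2-7b-hf-PAIR_headlines_one_liners-COMB-headlines-comb-2-seed-28-2025-06-19",
)
```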
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 28
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (torch implementation) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1 |
akar49/VIT_fourclass_Jun25 | akar49 | 2025-06-19T19:31:17Z | 0 | 0 | transformers | [
"transformers",
"tf",
"vit",
"image-classification",
"generated_from_keras_callback",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2025-06-19T19:31:04Z | ---
library_name: transformers
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_keras_callback
model-index:
- name: VIT_fourclass_Jun25
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# VIT_fourclass_Jun25
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0938
- Validation Loss: 2.3960
- Train Accuracy: 0.51
- Epoch: 14
## Model description
More information needed
## Intended uses & limitations
More information needed
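A minimal TensorFlow inference sketch (assumed usage: the image processor is taken from the base checkpoint since this card does not document its own preprocessing config, and the class labels are not listed):
```python
import tensorflow as tf
from PIL import Image
from transformers import AutoImageProcessor, TFViTForImageClassification

# Processor from the base checkpoint (assumption); classifier weights from this repo
processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224-in21k")
model = TFViTForImageClassification.from_pretrained("akar49/VIT_fourclass_Jun25")

image = Image.open("example.jpg")  # hypothetical input image
inputs = processor(images=image, return_tensors="tf")
logits = model(**inputs).logits
pred = int(tf.argmax(logits, axis=-1)[0])
print(model.config.id2label.get(pred, pred))
```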
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'SGD', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': np.float32(0.01), 'momentum': 0.0, 'nesterov': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.9196 | 1.2060 | 0.38 | 0 |
| 0.3792 | 1.4807 | 0.48 | 1 |
| 0.2729 | 1.7396 | 0.45 | 2 |
| 0.2006 | 2.4379 | 0.29 | 3 |
| 0.1996 | 2.4795 | 0.36 | 4 |
| 0.1734 | 2.7916 | 0.35 | 5 |
| 0.1860 | 4.1270 | 0.09 | 6 |
| 0.1490 | 2.7235 | 0.37 | 7 |
| 0.1077 | 3.5380 | 0.26 | 8 |
| 0.1173 | 2.7697 | 0.42 | 9 |
| 0.1526 | 2.7868 | 0.42 | 10 |
| 0.1161 | 3.1132 | 0.36 | 11 |
| 0.1093 | 3.5738 | 0.33 | 12 |
| 0.0884 | 3.1227 | 0.37 | 13 |
| 0.0938 | 2.3960 | 0.51 | 14 |
### Framework versions
- Transformers 4.52.4
- TensorFlow 2.18.0
- Datasets 3.6.0
- Tokenizers 0.21.1
|