modelId (string, len 5 to 139) | author (string, len 2 to 42) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-08-02 18:27:42) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 549 classes) | tags (list, len 1 to 4.05k) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-08-02 18:24:50) | card (string, len 11 to 1.01M)
---|---|---|---|---|---|---|---|---|---
sicholayk/MitolynReviews
|
sicholayk
| 2025-03-01T06:47:14Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-03-01T06:46:20Z |
I don't want to discuss your realm this evening if you care in connection with a headache. I'm typically well organized. Get started and do this and also this is one of the closely guarded secrets. That inference will come to a head. Do you want to go back on giving the feeling of being revengeful? This is uncertain, even if it isn't so. I was astounded by that happenstance. This will take the world by storm. I heard that pathetic story with regard to their data. There are lots of mechanisms to achieve this rapidly. How can you discover the euros you need for your Mitolyn? Just take a look at all the cases arising from it. That is the last detail I do before I fall to sleep. If you are planning on doing this then be careful. Whereby do apprentices purchase peerless Mitolyn sessions?
https://www.youtube.com/watch?v=_f6IDMHw9gQ
https://youtu.be/_f6IDMHw9gQ?si=5NpxSgNos4s_kaXR
https://nas.io/mydealsjunction/challenges/mitolyn-reviews-say-goodbye-to-stubborn-fat-with-mitolyn
https://nas.io/mydealsjunction/challenges/mitolyn-scam-worst-mitolyn-pills-scam-of-2025-watch-out
https://tinyurl.com/28r8k54a
https://tinyurl.com/3urtzmfd
https://imgur.com/a/mitolyn-reviews-2025-honest-review-eJ56RSt
https://heylink.me/mitolyngetnow/
https://www.behance.net/mitolynreviews1
https://bento.me/mitolynordernow
https://magic.ly/mitolynordernow
https://solo.to/mitolynordernow
https://taplink.cc/mitolynbuy
https://pastelink.net/obo3v4eo
https://linktr.ee/mitolynordernow
https://beacons.ai/mitolynordernow
https://www.pinterest.com/pin/1101763496378784382/
https://mitolynbenefits.quora.com/
https://www.pinterest.com/mitolyngetnow/
https://soundcloud.com/sarbkmay/mitolyn-scam
https://soundcloud.com/sarbkmay
https://slaps.com/mitolyngetnow
https://mymediads.com/mitolyn-reviews-should-you-try-mitolyn-for-weight-loss/
https://nas.io/mydealsjunction/challenges/mitolyn-scam-worst-mitolyn-pills-scam-of-2025-watch-out
https://sketchfab.com/3d-models/mitolyn-scam-read-consumer-reports-d376c0f3842f4a9f80576c25016493d2
https://sketchfab.com/mitolyngetnow
https://www.pixiv.net/novel/show.php?id=24160414
https://www.businesslistings.net.au/HEALTH/new_york/sicholayk/1108072.aspx
https://www.deviantart.com/sicholayk
https://www.deviantart.com/sicholayk/art/1165352644
https://eodev.com/gorev/30555887
https://www.provenexpert.com/sicholayk/
https://superuser.com/questions/1883798/mitolyn-scam-worst-mitolyn-pills
https://fueler.io/mitolyngetnow
https://tudomuaban.com/chi-tiet-rao-vat/2487605/mitolyn-reviews-read-consumer-reports-.html
https://znanija.com/task/56798682
https://community.netgear.com/t5/WiFi-Range-Extenders-Nighthawk/Is-This-Supplement-Good-For-Losing-Weight/m-p/2440336
https://groups.google.com/g/mitolyn-scam/c/dIZAekXv3SU
https://groups.google.com/g/mitolyn-scam/
https://www.mumsnet.com/talk/am_i_being_unreasonable/5284806-mitolyn-reviews-2025-my-honest-review
https://huggingface.co/sicholayk
https://www.historypin.org/en/mitolyn-reviews-2025-my-honest-review/pin/1199041
https://github.com/mitolyngetnow/Mitolyn-Reviews/
https://github.com/mitolyngetnow/
|
sbhikha/InkubaLM-MT-Swahili-8bit
|
sbhikha
| 2025-03-01T06:40:56Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-02-28T05:54:02Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
dmkhl/GPT
|
dmkhl
| 2025-03-01T06:40:28Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"aa",
"dataset:open-thoughts/OpenThoughts-114k",
"base_model:deepseek-ai/DeepSeek-R1",
"base_model:adapter:deepseek-ai/DeepSeek-R1",
"license:apache-2.0",
"region:us"
] | null | 2025-03-01T06:39:24Z |
---
license: apache-2.0
datasets:
- open-thoughts/OpenThoughts-114k
language:
- aa
metrics:
- accuracy
base_model:
- deepseek-ai/DeepSeek-R1
new_version: deepseek-ai/DeepSeek-R1
library_name: adapter-transformers
---
|
robiulawaldev/93d0b1ac-e7e5-489e-86f9-923924cc3668
|
robiulawaldev
| 2025-03-01T06:37:03Z | 0 | 0 |
peft
|
[
"peft",
"generated_from_trainer",
"base_model:unsloth/SmolLM2-1.7B-Instruct",
"base_model:adapter:unsloth/SmolLM2-1.7B-Instruct",
"region:us"
] | null | 2025-03-01T06:36:54Z |
---
library_name: peft
tags:
- generated_from_trainer
base_model: unsloth/SmolLM2-1.7B-Instruct
model-index:
- name: robiulawaldev/93d0b1ac-e7e5-489e-86f9-923924cc3668
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robiulawaldev/93d0b1ac-e7e5-489e-86f9-923924cc3668
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1463
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
Seongyun/DeepSeek-R1-Distill-Qwen-1.5B-GRPO_pref_repetition_penalty
|
Seongyun
| 2025-03-01T06:36:35Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"grpo",
"conversational",
"arxiv:2402.03300",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
"base_model:finetune:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-01T03:54:00Z |
---
base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
library_name: transformers
model_name: DeepSeek-R1-Distill-Qwen-1.5B-GRPO_pref_repetition_penalty
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---
# Model Card for DeepSeek-R1-Distill-Qwen-1.5B-GRPO_pref_repetition_penalty
This model is a fine-tuned version of [deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Seongyun/DeepSeek-R1-Distill-Qwen-1.5B-GRPO_pref_repetition_penalty", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/minjuseo/huggingface/runs/k5vl0gnl)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
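The group-relative advantage at the heart of GRPO can be sketched in a few lines: for each prompt, a group of completions is sampled, and every completion's reward is normalized by the group's own mean and standard deviation, so no learned value function is needed. This is a simplified illustration of the objective described in the DeepSeekMath paper, not the actual TRL implementation (which adds clipping and a KL penalty):

```python
from statistics import mean, pstdev

def group_relative_advantages(rewards: list[float]) -> list[float]:
    """Normalize each completion's reward by its group's statistics.

    The core GRPO idea: the baseline is the mean reward of the
    sampled group itself, rather than a learned critic.
    """
    mu = mean(rewards)
    sigma = pstdev(rewards) or 1.0  # guard against a zero-variance group
    return [(r - mu) / sigma for r in rewards]

# Rewards for 4 sampled completions of one prompt:
advantages = group_relative_advantages([1.0, 0.0, 0.5, 0.5])
```

Completions rewarded above the group mean get positive advantages and are reinforced; those below are pushed down.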
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.5.1
- Datasets: 3.3.1
- Tokenizers: 0.21.0
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
dykim723/dependencies
|
dykim723
| 2025-03-01T06:36:28Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-03-01T06:34:43Z |
---
license: apache-2.0
---
|
PrunaAI/LinkSoul-Chinese-Llama-2-7b-HQQ-8bit-smashed
|
PrunaAI
| 2025-03-01T06:36:09Z | 9 | 0 | null |
[
"llama",
"pruna-ai",
"hqq",
"region:us"
] | null | 2025-02-24T20:59:44Z |
---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: ORIGINAL_REPO_NAME
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/rskEr4BZJx)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/).
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with hqq.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the benchmarks directly under your use-case conditions to see whether the smashed model benefits you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained from the first run of the model. The first run might take more memory or be slower than subsequent runs due to CUDA overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping measurement when all of them have executed. "Async" metrics are obtained without syncing, stopping when the model output can be used by the CPU. We provide both since either may be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-cases.
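The naming convention in the FAQ above can be made concrete. A hypothetical helper (the function name and metric keys are assumptions for illustration; only the 90% rule and the three suffixes come from the FAQ):

```python
def pruna_suffixes(smashed: dict, base: dict) -> list[str]:
    """Return the name suffixes a smashed model earns.

    A suffix is earned when the smashed model's measurement is below
    90% of the base model's: "turbo" for inference latency, "tiny"
    for inference memory, "green" for inference energy consumption.
    """
    rules = [("turbo", "inference_latency"),
             ("tiny", "memory_inference"),
             ("green", "inference_energy_consumption")]
    return [name for name, metric in rules
            if smashed[metric] < 0.9 * base[metric]]

# 8-bit smashing that halves memory but barely changes latency/energy:
suffixes = pruna_suffixes(
    {"inference_latency": 95, "memory_inference": 50, "inference_energy_consumption": 92},
    {"inference_latency": 100, "memory_inference": 100, "inference_energy_consumption": 100},
)
```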
## Setup
You can run the smashed model with these steps:
0. Check the requirements of the original repo ORIGINAL_REPO_NAME. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install hqq
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from hqq.engine.hf import HQQModelForCausalLM
from hqq.models.hf.base import AutoHQQHFModel
try:
    model = HQQModelForCausalLM.from_quantized("PrunaAI/LinkSoul-Chinese-Llama-2-7b-HQQ-8bit-smashed", device_map='auto')
except Exception:
    # Fall back to the generic HQQ loader if the causal-LM wrapper fails.
    model = AutoHQQHFModel.from_quantized("PrunaAI/LinkSoul-Chinese-Llama-2-7b-HQQ-8bit-smashed")
tokenizer = AutoTokenizer.from_pretrained("ORIGINAL_REPO_NAME")
input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
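For intuition about what 8-bit weight quantization buys, here is a generic symmetric round-to-nearest sketch. This is purely illustrative of the 8-bit idea, not HQQ's actual half-quadratic quantization algorithm:

```python
def quantize_8bit(weights: list[float]) -> tuple[list[int], float]:
    """Symmetric 8-bit quantization: map floats to ints in [-127, 127]."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q: list[int], scale: float) -> list[float]:
    return [v * scale for v in q]

q, scale = quantize_8bit([0.50, -0.25, 0.125])
approx = dequantize(q, scale)  # close to the originals, stored in 1 byte each
```

Each weight now occupies one byte instead of two or four, at the cost of a small rounding error bounded by the scale.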
## Configurations
The configuration info is in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model ORIGINAL_REPO_NAME, which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
|
Antonin77777/Llama3Ollamamodelunslothtest
|
Antonin77777
| 2025-03-01T06:35:56Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-03-01T06:35:45Z |
---
base_model: unsloth/llama-3-8b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Antonin77777
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ManukyanD/colqwen2.5-clipped9-checkpoint-2000
|
ManukyanD
| 2025-03-01T06:34:19Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-03-01T06:33:57Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
olegGerbylev/Qwen2.5-0.5b-instruct-VTB
|
olegGerbylev
| 2025-03-01T06:33:16Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-0.5B-Instruct",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-01T06:31:42Z |
---
library_name: transformers
license: other
base_model: Qwen/Qwen2.5-0.5B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: train_2025-03-01-05-12-51
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_2025-03-01-05-12-51
This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct) on the match dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- num_epochs: 3.0
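The total train batch size above follows directly from the per-device batch size and gradient accumulation. A quick sanity check (a single training device is an assumption, since the device count is not stated in the card):

```python
train_batch_size = 2             # per-device batch size from the card
gradient_accumulation_steps = 8  # from the card
num_devices = 1                  # assumed; not stated in the card

# Gradients are accumulated over 8 micro-batches of 2 before each
# optimizer step, giving an effective batch of 16 examples.
total_train_batch_size = train_batch_size * gradient_accumulation_steps * num_devices
```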
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
|
tsss1/deepsek-qwen1.5-vpn
|
tsss1
| 2025-03-01T06:33:09Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"qwen2",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-01T06:32:55Z |
---
base_model: unsloth/deepseek-r1-distill-qwen-1.5b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** tsss1
- **License:** apache-2.0
- **Finetuned from model:** unsloth/deepseek-r1-distill-qwen-1.5b-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
AHAMED-27/Qwen-Qwen2.5-7B-Instruct-open-assistant-guanaco-2
|
AHAMED-27
| 2025-03-01T06:32:36Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"optimum_habana",
"base_model:Qwen/Qwen2.5-7B",
"base_model:adapter:Qwen/Qwen2.5-7B",
"region:us"
] | null | 2025-03-01T05:49:50Z |
---
base_model: Qwen/Qwen2.5-7B
library_name: peft
---
# Model Card for Qwen-Qwen2.5-7B-Instruct-open-assistant-guanaco-2
## Model Details
### Model Description
This model is a fine-tuned version of **Qwen2.5-7B**, optimized for **causal language modeling (CAUSAL_LM)** using **LoRA (Low-Rank Adaptation)**. The fine-tuning process was carried out under **Intel Gaudi access** using Habana Gaudi AI processors, leveraging `optimum-habana` for hardware acceleration.
- **Developed by:** AHAMED-27
- **Funded by:** [More Information Needed]
- **Shared by:** AHAMED-27
- **Model type:** Causal Language Model (CAUSAL_LM)
- **Language(s):** English
- **License:** [More Information Needed]
- **Finetuned from model:** [Qwen/Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B)
### Model Sources
- **Repository:** [AHAMED-27/Qwen-Qwen2.5-7B-Instruct-open-assistant-guanaco-2](https://huggingface.co/AHAMED-27/Qwen-Qwen2.5-7B-Instruct-open-assistant-guanaco-2)
- **Paper:** [More Information Needed]
- **Demo:** [More Information Needed]
## Uses
### Direct Use
This model is designed for natural language generation tasks, such as:
- Text completion
- Conversational AI
- Story generation
- Summarization
### Downstream Use
The model can be fine-tuned further for specific NLP applications such as:
- Chatbots
- Code generation
- Sentiment analysis
- Question answering
### Out-of-Scope Use
- The model is not intended for real-time decision-making applications where accuracy is critical.
- Avoid using it for generating misinformation or harmful content.
## Bias, Risks, and Limitations
### Known Risks
- The model may generate biased or incorrect responses as it is fine-tuned on publicly available datasets.
- It may not perform well on low-resource languages or domain-specific tasks without additional fine-tuning.
### Recommendations
- Users should verify the generated content before deploying it in production.
- Ethical considerations should be taken into account while using this model.
## How to Get Started with the Model
Use the code below to load and generate text using the model:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("AHAMED-27/Qwen-Qwen2.5-7B-Instruct-open-assistant-guanaco-2")
model = AutoModelForCausalLM.from_pretrained("AHAMED-27/Qwen-Qwen2.5-7B-Instruct-open-assistant-guanaco-2")
input_text = "Explain the benefits of using LoRA for fine-tuning large language models."
inputs = tokenizer(input_text, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=256)  # cap generation length; the default is very short
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
## Training Details
### Training Data
The model was fine-tuned on the **timdettmers/openassistant-guanaco** dataset.
### Training Procedure
#### Preprocessing
- Tokenization was performed using the `AutoTokenizer` from the `transformers` library.
- LoRA adaptation was applied to the attention projection layers (`q_proj`, `v_proj`).
#### Training Hyperparameters
- **Training Regime:** BF16 Mixed Precision
- **Epochs:** 3
- **Batch Size:** 16 per device
- **Learning Rate:** 1e-4
- **Optimizer:** Adam
- **Scheduler:** Constant LR
- **LoRA Rank (r):** 8
- **LoRA Alpha:** 16
- **LoRA Dropout:** 0.05
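With rank r=8 on the `q_proj` and `v_proj` layers, the number of trainable LoRA parameters is tiny compared to the frozen weights. A back-of-the-envelope count using the standard LoRA factorization ΔW = B·A with A of shape (r, d_in) and B of shape (d_out, r). The hidden size used below is illustrative, not Qwen2.5-7B's actual dimensions:

```python
def lora_param_count(d_in: int, d_out: int, r: int) -> int:
    """Trainable parameters LoRA adds to one weight matrix:
    A is (r, d_in) and B is (d_out, r), so r * (d_in + d_out)."""
    return r * (d_in + d_out)

d = 4096                         # illustrative hidden size (an assumption)
full = d * d                     # frozen parameters in one square projection
lora = lora_param_count(d, d, r=8)
fraction = lora / full           # roughly 0.4% of the full matrix
```

This is why LoRA fine-tuning fits comfortably in memory alongside the frozen base model.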
#### Speeds, Sizes, Times
- **Training Runtime:** 1026.98 seconds
- **Training Samples per Second:** 17.471
- **Training Steps per Second:** 1.092
- **Total Available Memory:** 94.62 GB
- **Max Memory Allocated:** 89.17 GB
- **Memory Currently Allocated:** 58.34 GB
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
- The model was evaluated on a held-out validation set from the **timdettmers/openassistant-guanaco** dataset.
#### Evaluation Metrics
- **Evaluation Accuracy:** 71.51%
- **Evaluation Loss:** 1.3675
- **Perplexity:** 3.92
- **Evaluation Runtime:** 20.308 seconds
- **Evaluation Samples per Second:** 22.511
- **Evaluation Steps per Second:** 2.882
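The reported perplexity is simply the exponential of the evaluation loss, since the loss is the mean cross-entropy per token for a causal language model:

```python
import math

eval_loss = 1.3675                  # evaluation loss from the card
perplexity = math.exp(eval_loss)    # consistent with the reported 3.92
```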
## Software Dependencies
- **Transformers Version:** 4.38.2
- **Optimum-Habana Version:** 1.24.0
- **Intel Gaudi SynapseAI Toolkit**
## Acknowledgments
This fine-tuning process was completed using **Intel Gaudi hardware**, enabling optimized performance with reduced training time. Special thanks to the **Intel Habana team** for their work on Gaudi AI processors.
For more details, visit [Habana Labs](https://habana.ai/).
|
mclemcrew/Qwen-Audio-Instruct-MixInstruct
|
mclemcrew
| 2025-03-01T06:32:24Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:Qwen/Qwen2-Audio-7B-Instruct",
"base_model:adapter:Qwen/Qwen2-Audio-7B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-03-01T06:31:38Z |
---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2-Audio-7B-Instruct
tags:
- generated_from_trainer
model-index:
- name: Qwen-Audio-Instruct-MixInstruct
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Qwen-Audio-Instruct-MixInstruct
This model is a fine-tuned version of [Qwen/Qwen2-Audio-7B-Instruct](https://huggingface.co/Qwen/Qwen2-Audio-7B-Instruct) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 6.5263
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: constant
- num_epochs: 1
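The total train batch size of 4 follows from the per-device batch size and the gradient accumulation steps:
```python
# Effective (total) train batch size = per-device batch size
# multiplied by the gradient accumulation steps.
train_batch_size = 1
gradient_accumulation_steps = 4
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # → 4
```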
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 8.2976 | 0.2857 | 2 | 7.6225 |
| 7.917 | 0.5714 | 4 | 6.9937 |
| 6.8546 | 0.8571 | 6 | 6.5263 |
### Framework versions
- PEFT 0.14.0
- Transformers 4.49.0
- Pytorch 2.5.1+cu124
- Datasets 3.3.2
- Tokenizers 0.21.0
|
bowilleatyou/d0388abf-72b5-4571-af88-21d5e8692e9b
|
bowilleatyou
| 2025-03-01T06:30:27Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-03-01T03:06:37Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
cointeleporting/SmolLM2-1.7B-Instruct-thinking-function_calling-V0
|
cointeleporting
| 2025-03-01T06:19:52Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:HuggingFaceTB/SmolLM2-1.7B-Instruct",
"base_model:finetune:HuggingFaceTB/SmolLM2-1.7B-Instruct",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2025-03-01T05:26:31Z |
---
base_model: HuggingFaceTB/SmolLM2-1.7B-Instruct
library_name: transformers
model_name: SmolLM2-1.7B-Instruct-thinking-function_calling
tags:
- generated_from_trainer
- trl
- sft
license: mit
---
# Model Card for SmolLM2-1.7B-Instruct-thinking-function_calling-V0
This model is a fine-tuned version of [HuggingFaceTB/SmolLM2-1.7B-Instruct](https://huggingface.co/HuggingFaceTB/SmolLM2-1.7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="cointeleporting/SmolLM2-1.7B-Instruct-thinking-function_calling-V0", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.15.2
- Transformers: 4.47.0
- Pytorch: 2.5.1+cu121
- Datasets: 3.3.1
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
tsss1/DeepSeek-r1-qwen1.5
|
tsss1
| 2025-03-01T06:18:38Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-03-01T06:18:33Z |
---
base_model: unsloth/deepseek-r1-distill-qwen-1.5b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** tsss1
- **License:** apache-2.0
- **Finetuned from model :** unsloth/deepseek-r1-distill-qwen-1.5b-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
MaziyarPanahi/Latxa-Llama-3.1-8B-Instruct-GGUF
|
MaziyarPanahi
| 2025-03-01T06:17:04Z | 0 | 0 | null |
[
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"text-generation",
"base_model:HiTZ/Latxa-Llama-3.1-8B-Instruct",
"base_model:quantized:HiTZ/Latxa-Llama-3.1-8B-Instruct",
"region:us",
"conversational"
] |
text-generation
| 2025-03-01T05:54:48Z |
---
base_model: HiTZ/Latxa-Llama-3.1-8B-Instruct
inference: false
model_creator: HiTZ
model_name: Latxa-Llama-3.1-8B-Instruct-GGUF
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- text-generation
---
# [MaziyarPanahi/Latxa-Llama-3.1-8B-Instruct-GGUF](https://huggingface.co/MaziyarPanahi/Latxa-Llama-3.1-8B-Instruct-GGUF)
- Model creator: [HiTZ](https://huggingface.co/HiTZ)
- Original model: [HiTZ/Latxa-Llama-3.1-8B-Instruct](https://huggingface.co/HiTZ/Latxa-Llama-3.1-8B-Instruct)
## Description
[MaziyarPanahi/Latxa-Llama-3.1-8B-Instruct-GGUF](https://huggingface.co/MaziyarPanahi/Latxa-Llama-3.1-8B-Instruct-GGUF) contains GGUF format model files for [HiTZ/Latxa-Llama-3.1-8B-Instruct](https://huggingface.co/HiTZ/Latxa-Llama-3.1-8B-Instruct).
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
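GGUF repositories such as this one usually ship one file per quantization level (the 2-bit through 8-bit variants tagged above). The snippet below builds the filenames you would pass to one of the GGUF clients listed; the naming pattern is an assumption based on common conventions for quantized repos, so verify the actual names in the repository's file listing.
```python
# Hypothetical filename pattern for the quantized files in this repo.
base = "Latxa-Llama-3.1-8B-Instruct"
quants = ["Q2_K", "Q3_K_M", "Q4_K_M", "Q5_K_M", "Q6_K", "Q8_0"]
files = [f"{base}.{quant}.gguf" for quant in quants]
print(files[2])  # → Latxa-Llama-3.1-8B-Instruct.Q4_K_M.gguf
```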
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
|
YoichiTakenaka/deverta-v3-japanese-large-Trust
|
YoichiTakenaka
| 2025-03-01T06:16:04Z | 7 | 0 | null |
[
"safetensors",
"deberta-v2",
"text-classification",
"japanese",
"license:cc-by-sa-4.0",
"region:us"
] |
text-classification
| 2025-02-21T02:15:26Z |
---
license: cc-by-sa-4.0
license_details: |
Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0)
Copyright (c) 2025 Yoichi Takenaka
This work is licensed under the Creative Commons Attribution-ShareAlike 4.0 International License.
To view a copy of this license, visit https://creativecommons.org/licenses/by-sa/4.0/
This project is based on:
- DeBERTa (https://huggingface.co/microsoft/deberta-v3-large), licensed under the MIT License.
- DeBERTa Japanese Model (https://huggingface.co/globis-university/deberta-v3-japanese-large), licensed under the CC BY-SA 4.0 License.
Any modifications or derivative works must also be distributed under the same CC BY-SA 4.0 License.
tags:
- text-classification
- japanese
model-index:
- name: deverta-v3-japanese-large-Trust
results: []
---
# DeBERTa Emotion Predictor
This package provides a DeBERTa-based model for predicting emotions in Japanese text.
DeBERTa Emotion Predictor is a Python package that performs emotion estimation on Japanese text using fine-tuned DeBERTa models. It uses a dedicated model for each of eight emotions (Joy, Sadness, Anticipation, Surprise, Anger, Fear, Disgust, Trust), and for each input text you can easily obtain the predicted label and the confidence of the positive class for every emotion.
## Installation
Install with pip:
```bash
pip install deberta-emotion-predictor
```
## Quick start
```python
from deberta_emotion_predictor import DeBERTaEmotionPredictor
predictor = DeBERTaEmotionPredictor()
result = predictor.predict_emotions("今日はとても嬉しい!")
predictor.show_emotions(result)
```
Note: on first use the package downloads eight DeBERTa models from Hugging Face, so the first run takes a long time; subsequent runs are much faster.
DataFrames can also be used as input.
```python
import pandas as pd
from deberta_emotion_predictor import DeBERTaEmotionPredictor
# model_dir points to the directory containing the language model and tokenizer
predictor = DeBERTaEmotionPredictor()
# Sample texts (as a list)
sample_texts = [
"そうだ 京都、行こう。",
"がんばるひとの、がんばらない時間。",
"わたしらしくをあたらしく",
"ピースはここにある。",
"結婚しなくても幸せになれるこの時代に、私は、あなたと結婚したいのです。",
"これからの地球のために一肌、脱ぎました。",
"自分は、きっと想像以上だ。",
"ハローしあわせ。",
"日本を、1枚で。"
]
res_df = predictor.predict_emotions(sample_texts)
predictor.show_emotions(res_df)
```
Running the package requires torch, transformers, and pandas:
```bash
pip install torch
pip install transformers
pip install pandas
```
To use a GPU you also need to install the NVIDIA GPU drivers and related software; please refer to other resources for the details.
## Features
- **Eight-emotion estimation**
A fine-tuned model is used for each of the eight emotions to estimate the emotions expressed in a text.
- **Flexible input formats**
Accepts a single string, a list of strings, or a pandas Series as input, and returns the results as a DataFrame.
- **Efficient inference**
To keep GPU memory usage low, each model is loaded onto the GPU only when it is needed.
## Usage
The following are basic usage examples:
### Passing texts as a list
```python
sample_texts = [
"そうだ 京都、行こう。",
"がんばるひとの、がんばらない時間。"
]
result_df = predictor.predict_emotions(sample_texts)
predictor.show_emotions(result_df)
```
### Single text
```python
result_single = predictor.predict_emotions("新しい朝が来た。")
print(result_single)
```
### Output DataFrame
The output DataFrame contains eight columns indicating the presence or absence of each emotion, plus the probability score for each emotion.
```python
print(result_df)
```
## Directory layout
```
deberta_emotion_predictor/
├── README.md                      # this file
├── deberta_emotion_predictor.py   # implementation of the DeBERTaEmotionPredictor class
├── tokenizer_DeBERTa_v3_large/    # tokenizer
├── setup.py
├── pyproject.toml
├── LICENSE
└── usage.py
```
## Requirements
- Python 3.6 or later
- PyTorch
- transformers
- pandas
## License
Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0)
Copyright (c) 2025 Yoichi Takenaka
This work is licensed under the Creative Commons Attribution-ShareAlike 4.0 International License.
To view a copy of this license, visit https://creativecommons.org/licenses/by-sa/4.0/
This project is based on:
- DeBERTa (https://huggingface.co/microsoft/deberta-v3-large), licensed under the MIT License.
- DeBERTa Japanese Model (https://huggingface.co/globis-university/deberta-v3-japanese-large), licensed under the CC BY-SA 4.0 License.
Any modifications or derivative works must also be distributed under the same CC BY-SA 4.0 License.
|
YoichiTakenaka/deverta-v3-japanese-large-Sadness
|
YoichiTakenaka
| 2025-03-01T06:15:40Z | 6 | 0 | null |
[
"safetensors",
"deberta-v2",
"text-classification",
"japanese",
"license:cc-by-sa-4.0",
"region:us"
] |
text-classification
| 2025-02-21T02:13:18Z |
---
license: cc-by-sa-4.0
license_details: |
Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0)
Copyright (c) 2025 Yoichi Takenaka
This work is licensed under the Creative Commons Attribution-ShareAlike 4.0 International License.
To view a copy of this license, visit https://creativecommons.org/licenses/by-sa/4.0/
This project is based on:
- DeBERTa (https://huggingface.co/microsoft/deberta-v3-large), licensed under the MIT License.
- DeBERTa Japanese Model (https://huggingface.co/globis-university/deberta-v3-japanese-large), licensed under the CC BY-SA 4.0 License.
Any modifications or derivative works must also be distributed under the same CC BY-SA 4.0 License.
tags:
- text-classification
- japanese
model-index:
- name: deverta-v3-japanese-large-Sadness
results: []
---
# DeBERTa Emotion Predictor
This package provides a DeBERTa-based model for predicting emotions in Japanese text.
DeBERTa Emotion Predictor is a Python package that performs emotion estimation on Japanese text using fine-tuned DeBERTa models. It uses a dedicated model for each of eight emotions (Joy, Sadness, Anticipation, Surprise, Anger, Fear, Disgust, Trust), and for each input text you can easily obtain the predicted label and the confidence of the positive class for every emotion.
## Installation
Install with pip:
```bash
pip install deberta-emotion-predictor
```
## Quick start
```python
from deberta_emotion_predictor import DeBERTaEmotionPredictor
predictor = DeBERTaEmotionPredictor()
result = predictor.predict_emotions("今日はとても嬉しい!")
predictor.show_emotions(result)
```
Note: on first use the package downloads eight DeBERTa models from Hugging Face, so the first run takes a long time; subsequent runs are much faster.
DataFrames can also be used as input.
```python
import pandas as pd
from deberta_emotion_predictor import DeBERTaEmotionPredictor
# model_dir points to the directory containing the language model and tokenizer
predictor = DeBERTaEmotionPredictor()
# Sample texts (as a list)
sample_texts = [
"そうだ 京都、行こう。",
"がんばるひとの、がんばらない時間。",
"わたしらしくをあたらしく",
"ピースはここにある。",
"結婚しなくても幸せになれるこの時代に、私は、あなたと結婚したいのです。",
"これからの地球のために一肌、脱ぎました。",
"自分は、きっと想像以上だ。",
"ハローしあわせ。",
"日本を、1枚で。"
]
res_df = predictor.predict_emotions(sample_texts)
predictor.show_emotions(res_df)
```
Running the package requires torch, transformers, and pandas:
```bash
pip install torch
pip install transformers
pip install pandas
```
To use a GPU you also need to install the NVIDIA GPU drivers and related software; please refer to other resources for the details.
## Features
- **Eight-emotion estimation**
A fine-tuned model is used for each of the eight emotions to estimate the emotions expressed in a text.
- **Flexible input formats**
Accepts a single string, a list of strings, or a pandas Series as input, and returns the results as a DataFrame.
- **Efficient inference**
To keep GPU memory usage low, each model is loaded onto the GPU only when it is needed.
## Usage
The following are basic usage examples:
### Passing texts as a list
```python
sample_texts = [
"そうだ 京都、行こう。",
"がんばるひとの、がんばらない時間。"
]
result_df = predictor.predict_emotions(sample_texts)
predictor.show_emotions(result_df)
```
### Single text
```python
result_single = predictor.predict_emotions("新しい朝が来た。")
print(result_single)
```
### Output DataFrame
The output DataFrame contains eight columns indicating the presence or absence of each emotion, plus the probability score for each emotion.
```python
print(result_df)
```
## Directory layout
```
deberta_emotion_predictor/
├── README.md                      # this file
├── deberta_emotion_predictor.py   # implementation of the DeBERTaEmotionPredictor class
├── tokenizer_DeBERTa_v3_large/    # tokenizer
├── setup.py
├── pyproject.toml
├── LICENSE
└── usage.py
```
## Requirements
- Python 3.6 or later
- PyTorch
- transformers
- pandas
## License
Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0)
Copyright (c) 2025 Yoichi Takenaka
This work is licensed under the Creative Commons Attribution-ShareAlike 4.0 International License.
To view a copy of this license, visit https://creativecommons.org/licenses/by-sa/4.0/
This project is based on:
- DeBERTa (https://huggingface.co/microsoft/deberta-v3-large), licensed under the MIT License.
- DeBERTa Japanese Model (https://huggingface.co/globis-university/deberta-v3-japanese-large), licensed under the CC BY-SA 4.0 License.
Any modifications or derivative works must also be distributed under the same CC BY-SA 4.0 License.
|
YoichiTakenaka/deverta-v3-japanese-large-Disgust
|
YoichiTakenaka
| 2025-03-01T06:14:56Z | 6 | 0 | null |
[
"safetensors",
"deberta-v2",
"text-classification",
"japanese",
"license:cc-by-sa-4.0",
"region:us"
] |
text-classification
| 2025-02-21T02:15:08Z |
---
license: cc-by-sa-4.0
license_details: |
Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0)
Copyright (c) 2025 Yoichi Takenaka
This work is licensed under the Creative Commons Attribution-ShareAlike 4.0 International License.
To view a copy of this license, visit https://creativecommons.org/licenses/by-sa/4.0/
This project is based on:
- DeBERTa (https://huggingface.co/microsoft/deberta-v3-large), licensed under the MIT License.
- DeBERTa Japanese Model (https://huggingface.co/globis-university/deberta-v3-japanese-large), licensed under the CC BY-SA 4.0 License.
Any modifications or derivative works must also be distributed under the same CC BY-SA 4.0 License.
tags:
- text-classification
- japanese
model-index:
- name: deverta-v3-japanese-large-Disgust
results: []
---
# DeBERTa Emotion Predictor
This package provides a DeBERTa-based model for predicting emotions in Japanese text.
DeBERTa Emotion Predictor is a Python package that performs emotion estimation on Japanese text using fine-tuned DeBERTa models. It uses a dedicated model for each of eight emotions (Joy, Sadness, Anticipation, Surprise, Anger, Fear, Disgust, Trust), and for each input text you can easily obtain the predicted label and the confidence of the positive class for every emotion.
## Installation
Install with pip:
```bash
pip install deberta-emotion-predictor
```
## Quick start
```python
from deberta_emotion_predictor import DeBERTaEmotionPredictor
predictor = DeBERTaEmotionPredictor()
result = predictor.predict_emotions("今日はとても嬉しい!")
predictor.show_emotions(result)
```
Note: on first use the package downloads eight DeBERTa models from Hugging Face, so the first run takes a long time; subsequent runs are much faster.
DataFrames can also be used as input.
```python
import pandas as pd
from deberta_emotion_predictor import DeBERTaEmotionPredictor
# model_dir points to the directory containing the language model and tokenizer
predictor = DeBERTaEmotionPredictor()
# Sample texts (as a list)
sample_texts = [
"そうだ 京都、行こう。",
"がんばるひとの、がんばらない時間。",
"わたしらしくをあたらしく",
"ピースはここにある。",
"結婚しなくても幸せになれるこの時代に、私は、あなたと結婚したいのです。",
"これからの地球のために一肌、脱ぎました。",
"自分は、きっと想像以上だ。",
"ハローしあわせ。",
"日本を、1枚で。"
]
res_df = predictor.predict_emotions(sample_texts)
predictor.show_emotions(res_df)
```
Running the package requires torch, transformers, and pandas:
```bash
pip install torch
pip install transformers
pip install pandas
```
To use a GPU you also need to install the NVIDIA GPU drivers and related software; please refer to other resources for the details.
## Features
- **Eight-emotion estimation**
A fine-tuned model is used for each of the eight emotions to estimate the emotions expressed in a text.
- **Flexible input formats**
Accepts a single string, a list of strings, or a pandas Series as input, and returns the results as a DataFrame.
- **Efficient inference**
To keep GPU memory usage low, each model is loaded onto the GPU only when it is needed.
## Usage
The following are basic usage examples:
### Passing texts as a list
```python
sample_texts = [
"そうだ 京都、行こう。",
"がんばるひとの、がんばらない時間。"
]
result_df = predictor.predict_emotions(sample_texts)
predictor.show_emotions(result_df)
```
### Single text
```python
result_single = predictor.predict_emotions("新しい朝が来た。")
print(result_single)
```
### Output DataFrame
The output DataFrame contains eight columns indicating the presence or absence of each emotion, plus the probability score for each emotion.
```python
print(result_df)
```
## Directory layout
```
deberta_emotion_predictor/
├── README.md                      # this file
├── deberta_emotion_predictor.py   # implementation of the DeBERTaEmotionPredictor class
├── tokenizer_DeBERTa_v3_large/    # tokenizer
├── setup.py
├── pyproject.toml
├── LICENSE
└── usage.py
```
## Requirements
- Python 3.6 or later
- PyTorch
- transformers
- pandas
## License
Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0)
Copyright (c) 2025 Yoichi Takenaka
This work is licensed under the Creative Commons Attribution-ShareAlike 4.0 International License.
To view a copy of this license, visit https://creativecommons.org/licenses/by-sa/4.0/
This project is based on:
- DeBERTa (https://huggingface.co/microsoft/deberta-v3-large), licensed under the MIT License.
- DeBERTa Japanese Model (https://huggingface.co/globis-university/deberta-v3-japanese-large), licensed under the CC BY-SA 4.0 License.
Any modifications or derivative works must also be distributed under the same CC BY-SA 4.0 License.
|
kchayanapas/Hoog-dialect-agent
|
kchayanapas
| 2025-03-01T06:13:46Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"fill-mask",
"generated_from_trainer",
"base_model:lst-nectec/HoogBERTa",
"base_model:finetune:lst-nectec/HoogBERTa",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2025-03-01T04:35:20Z |
---
library_name: transformers
license: mit
base_model: lst-nectec/HoogBERTa
tags:
- generated_from_trainer
model-index:
- name: Hoog-dialect-agent
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Hoog-dialect-agent
This model is a fine-tuned version of [lst-nectec/HoogBERTa](https://huggingface.co/lst-nectec/HoogBERTa) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4763
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.1149 | 1.0 | 15421 | 1.7294 |
| 1.732 | 2.0 | 30842 | 1.5678 |
| 1.5818 | 3.0 | 46263 | 1.4763 |
### Framework versions
- Transformers 4.48.2
- Pytorch 2.6.0+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
|
bkale22/him
|
bkale22
| 2025-03-01T06:12:16Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-03-01T06:12:16Z |
---
license: apache-2.0
---
|
PrunaAI/Salesforce-xgen-7b-8k-base-HQQ-8bit-smashed
|
PrunaAI
| 2025-03-01T06:12:04Z | 4 | 0 | null |
[
"llama",
"pruna-ai",
"hqq",
"region:us"
] | null | 2025-02-24T16:52:34Z |
---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: ORIGINAL_REPO_NAME
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/rskEr4BZJx)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with hqq.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the models directly in your use-case conditions to see whether the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements of the original repo ORIGINAL_REPO_NAME are installed. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install hqq
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from hqq.engine.hf import HQQModelForCausalLM
from hqq.models.hf.base import AutoHQQHFModel
try:
model = HQQModelForCausalLM.from_quantized("PrunaAI/Salesforce-xgen-7b-8k-base-HQQ-8bit-smashed", device_map='auto')
except:
model = AutoHQQHFModel.from_quantized("PrunaAI/Salesforce-xgen-7b-8k-base-HQQ-8bit-smashed")
tokenizer = AutoTokenizer.from_pretrained("ORIGINAL_REPO_NAME")
input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info is in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model ORIGINAL_REPO_NAME, which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
|
akibc123/LLava_pruned_layer_sensitivity_5.4B
|
akibc123
| 2025-03-01T06:09:24Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llava",
"image-text-to-text",
"conversational",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-03-01T06:05:58Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
AlexS3957/mralex-lora
|
AlexS3957
| 2025-03-01T06:07:41Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-03-01T05:26:14Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# Mralex Lora
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('AlexS3957/mralex-lora', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
KiahHong/distilled-bias-bert
|
KiahHong
| 2025-03-01T06:05:04Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-03-01T06:04:11Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
jeix/TA-SAE
|
jeix
| 2025-03-01T06:04:51Z | 0 | 1 | null |
[
"safetensors",
"license:mit",
"region:us"
] | null | 2025-02-10T13:21:26Z |
---
license: mit
---
# TA-SAE Model Card
This repository contains the trained Temporal-Aware Sparse AutoEncoder (TA-SAE) models for different layers.
## Model Description
TA-SAE is a specialized autoencoder model designed for temporal feature extraction and compression. Each layer model represents a different level of feature abstraction in the network.
## Usage
### Installation
```bash
pip install huggingface_hub
```
### Loading Models
#### Download a specific file:
```python
from huggingface_hub import hf_hub_download
# Download specific layer model
file_path = hf_hub_download(
repo_id="jeix/TA-SAE",
filename="PixArt/SAE-Layer0/model.safetensors"
)
```
#### Download all files for a specific layer:
```python
from huggingface_hub import snapshot_download
# Download all files for layer0
local_dir = snapshot_download(
repo_id="jeix/TA-SAE",
repo_type="model",
allow_patterns="PixArt/SAE-Layer0/*"
)
```
#### Download all layers:
```python
local_dir = snapshot_download(
repo_id="jeix/TA-SAE",
repo_type="model",
allow_patterns="PixArt/SAE-Layer*/*"
)
```
### Using Command Line
#### Install CLI tool
```bash
pip install -U huggingface_hub
```
#### Download specific file
```bash
huggingface-cli download jeix/TA-SAE --local-dir ./download --include "PixArt/SAE-Layer0/model.safetensors"
```
## Model Files Description
Each layer directory contains the following files:
- `model.safetensors`: The main model weights
- `optimizer.bin`: Optimizer state
- `scheduler.bin`: Learning rate scheduler state
- `random_states_0.pkl`: Random state information
- `scaler.pt`: Data scaling parameters
<!-- ## License
[Add your license information here]
## Citation
[Add citation information if applicable]
## Contact
[Add your contact information or github profile] -->
|
mshen2/qwen2.5-math-7b-v4-nohcot
|
mshen2
| 2025-03-01T06:04:40Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-01T05:53:56Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
rohinm/model_works
|
rohinm
| 2025-03-01T06:03:37Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-01T06:01:34Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
7Dragons/Michelin_2v1
|
7Dragons
| 2025-03-01T06:01:16Z | 0 | 0 | null |
[
"onnx",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-03-01T06:00:09Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
Isylimanov099/Rysbek
|
Isylimanov099
| 2025-03-01T05:58:35Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-03-01T05:58:20Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Jeongmoon/rag_unambig_single_8B_without_distr
|
Jeongmoon
| 2025-03-01T05:58:13Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:adapter:meta-llama/Llama-3.1-8B-Instruct",
"region:us"
] | null | 2025-03-01T05:41:33Z |
---
base_model: "meta-llama/Meta-Llama-3.1-8B-Instruct"
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
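The card leaves this section blank; below is a minimal loading sketch, assuming the adapter in this repo is applied on top of the base model named in the metadata. The heavy imports and downloads are deferred into the function, and the repo ids are taken from this card.

```python
BASE_ID = "meta-llama/Meta-Llama-3.1-8B-Instruct"
ADAPTER_ID = "Jeongmoon/rag_unambig_single_8B_without_distr"  # this repo

def load_model(base_id: str = BASE_ID, adapter_id: str = ADAPTER_ID):
    """Load the base model in bf16 and attach this PEFT (LoRA) adapter."""
    import torch
    from peft import PeftModel
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(base_id)
    base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)
    model = PeftModel.from_pretrained(base, adapter_id)
    return tokenizer, model
```

Note that the base model is gated on the Hub; request access to it before the download will succeed.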
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.12.0
|
jerseyjerry/task-5-microsoft-Phi-3-mini-4k-instruct-20250301
|
jerseyjerry
| 2025-03-01T05:54:53Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama-factory",
"lora",
"generated_from_trainer",
"base_model:microsoft/Phi-3-mini-4k-instruct",
"base_model:adapter:microsoft/Phi-3-mini-4k-instruct",
"license:other",
"region:us"
] | null | 2025-03-01T05:54:31Z |
---
library_name: peft
license: other
base_model: microsoft/Phi-3-mini-4k-instruct
tags:
- llama-factory
- lora
- generated_from_trainer
model-index:
- name: lora
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lora
This model is a fine-tuned version of [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) on the flock_task5_tranning dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0052
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- total_eval_batch_size: 2
- optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 60
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.5231 | 2.5 | 10 | 1.6286 |
| 1.4069 | 5.0 | 20 | 1.4773 |
| 1.2518 | 7.5 | 30 | 1.3657 |
| 1.3069 | 10.0 | 40 | 1.2441 |
| 1.0816 | 12.5 | 50 | 1.0924 |
| 1.0063 | 15.0 | 60 | 0.9201 |
| 0.666 | 17.5 | 70 | 0.7236 |
| 0.5723 | 20.0 | 80 | 0.5105 |
| 0.3671 | 22.5 | 90 | 0.3136 |
| 0.2108 | 25.0 | 100 | 0.1737 |
| 0.1203 | 27.5 | 110 | 0.0830 |
| 0.069 | 30.0 | 120 | 0.0397 |
| 0.0233 | 32.5 | 130 | 0.0212 |
| 0.0158 | 35.0 | 140 | 0.0129 |
| 0.0104 | 37.5 | 150 | 0.0093 |
| 0.0081 | 40.0 | 160 | 0.0076 |
| 0.0073 | 42.5 | 170 | 0.0066 |
| 0.0072 | 45.0 | 180 | 0.0060 |
| 0.0062 | 47.5 | 190 | 0.0056 |
| 0.0063 | 50.0 | 200 | 0.0054 |
| 0.0068 | 52.5 | 210 | 0.0053 |
| 0.0064 | 55.0 | 220 | 0.0052 |
| 0.0061 | 57.5 | 230 | 0.0052 |
| 0.0056 | 60.0 | 240 | 0.0052 |
### Framework versions
- PEFT 0.12.0
- Transformers 4.48.3
- Pytorch 2.6.0+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
|
Isylimanov099/DeepSeekLawyer-1
|
Isylimanov099
| 2025-03-01T05:54:28Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-02-28T14:21:58Z |
---
base_model: unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Isylimanov099
- **License:** apache-2.0
- **Finetuned from model :** unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
qing-yao/strict_default_seed-63_1e-3
|
qing-yao
| 2025-03-01T05:52:58Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"opt",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-02-28T18:08:39Z |
---
library_name: transformers
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: strict_default_seed-63_1e-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# strict_default_seed-63_1e-3
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1796
- Accuracy: 0.4013
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 64
- seed: 63
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 32000
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:-----:|:---------------:|:--------:|
| 5.9692 | 0.9999 | 1487 | 4.4075 | 0.2938 |
| 4.3051 | 1.9998 | 2974 | 3.9084 | 0.3325 |
| 3.7034 | 2.9997 | 4461 | 3.6285 | 0.3558 |
| 3.5333 | 3.9997 | 5948 | 3.4659 | 0.3708 |
| 3.31 | 4.9996 | 7435 | 3.3671 | 0.3807 |
| 3.2377 | 5.9995 | 8922 | 3.3051 | 0.3862 |
| 3.1277 | 6.9994 | 10409 | 3.2697 | 0.3903 |
| 3.091 | 8.0 | 11897 | 3.2403 | 0.3931 |
| 3.0274 | 8.9999 | 13384 | 3.2207 | 0.3948 |
| 3.0015 | 9.9998 | 14871 | 3.2077 | 0.3969 |
| 2.9642 | 10.9997 | 16358 | 3.2009 | 0.3975 |
| 2.9446 | 11.9997 | 17845 | 3.1935 | 0.3985 |
| 2.922 | 12.9996 | 19332 | 3.1888 | 0.3992 |
| 2.9046 | 13.9995 | 20819 | 3.1852 | 0.3999 |
| 2.8939 | 14.9994 | 22306 | 3.1799 | 0.4005 |
| 2.8755 | 16.0 | 23794 | 3.1868 | 0.3999 |
| 2.8744 | 16.9999 | 25281 | 3.1763 | 0.4012 |
| 2.8578 | 17.9998 | 26768 | 3.1774 | 0.4013 |
| 2.8626 | 18.9997 | 28255 | 3.1776 | 0.4015 |
| 2.845 | 19.9983 | 29740 | 3.1796 | 0.4013 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.20.0
|
mradermacher/L3-Stheno-Maid-Blackroot-Grand-HORROR-16.5B-V1.5-STABLE-i1-GGUF
|
mradermacher
| 2025-03-01T05:52:24Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-03-01T05:52:22Z |
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/DavidAU/L3-Stheno-Maid-Blackroot-Grand-HORROR-16.5B-V1.5-STABLE
|
sfarrukhm/ppo-LunarLander-v2
|
sfarrukhm
| 2025-03-01T05:51:43Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-03-01T05:51:22Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 250.11 +/- 21.95
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the Files & versions tab if it differs):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

checkpoint = load_from_hub(
    repo_id="sfarrukhm/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",  # assumed filename
)
model = PPO.load(checkpoint)
```
|
JFernandoGRE/bert-ner-colombian-elitenames
|
JFernandoGRE
| 2025-03-01T05:51:39Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"token-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2025-03-01T05:51:17Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
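The card does not document the label set or expected inputs; the following is a generic token-classification sketch, assuming the standard `transformers` pipeline API applies to this checkpoint.

```python
REPO_ID = "JFernandoGRE/bert-ner-colombian-elitenames"  # this repo

def tag_entities(text: str, repo_id: str = REPO_ID):
    """Run NER over `text`, grouping sub-word tokens into whole entities."""
    from transformers import pipeline

    ner = pipeline("token-classification", model=repo_id, aggregation_strategy="simple")
    return ner(text)
```

Each returned dict carries the entity group, score, and character span for one detected name.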
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Isylimanov099/Venera
|
Isylimanov099
| 2025-03-01T05:50:45Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-03-01T05:50:16Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
JackyWW/vit-finetuned
|
JackyWW
| 2025-03-01T05:48:28Z | 11 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2025-02-24T07:06:24Z |
---
library_name: transformers
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vit-finetuned
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.55625
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-finetuned
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2270
- Accuracy: 0.5563
## Model description
More information needed
## Intended uses & limitations
More information needed
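Since usage is not documented, here is a minimal image-classification sketch (assuming the standard `transformers` pipeline API; the label set comes from the imagefolder dataset and is not listed in this card).

```python
REPO_ID = "JackyWW/vit-finetuned"  # this repo

def classify(image, repo_id: str = REPO_ID):
    """Return the top predicted labels for a PIL image, path, or URL."""
    from transformers import pipeline

    clf = pipeline("image-classification", model=repo_id)
    return clf(image)
```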
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0026 | 1.0 | 64 | 1.3046 | 0.5125 |
| 0.6945 | 2.0 | 128 | 1.2227 | 0.5437 |
| 0.4462 | 3.0 | 192 | 1.2127 | 0.5563 |
| 0.2831 | 4.0 | 256 | 1.2013 | 0.55 |
| 0.2379 | 5.0 | 320 | 1.2270 | 0.5563 |
### Framework versions
- Transformers 4.48.3
- Pytorch 2.5.1+cu124
- Datasets 3.3.2
- Tokenizers 0.21.0
|
Jonjew/GlowingGlitchFlux
|
Jonjew
| 2025-03-01T05:47:20Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:unknown",
"region:us"
] |
text-to-image
| 2025-03-01T05:46:16Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
woman wearing a glowing mad-glwngmrbldppr dress walking through a public
park, smile <lora:glowing-glitch-flux:1>, night
parameters:
negative_prompt: 'Guidance: 1 Steps: 20 Seed: 763776051'
output:
url: images/20240916_085432_763776051_flux1-dev-fp8.png
- text: >-
woman wearing a glowing mad-glwngmrbldppr scarf, black skirt and white top
in front of a red sports car, city <lora:glowing-glitch-flux:1>, night
parameters:
negative_prompt: 'Guidance: 1 Steps: 20 Seed: 1283386014'
output:
url: images/20240916_090334_1283386014_flux1-dev-fp8.png
- text: >-
woman wearing a glowing mad-glwngmrbldppr scarf, black skirt and white top
in front of a red sports car, city <lora:glowing-glitch-flux:1>, night
parameters:
negative_prompt: 'Guidance: 1 Steps: 20 Seed: 1283386013'
output:
url: images/20240916_090250_1283386013_flux1-dev-fp8.png
- text: >-
woman wearing a glowing mad-glwnggltch dress walking through a public park,
smile <lora:glowing-glitch-flux:1>, night
parameters:
negative_prompt: 'Guidance: 1 Steps: 20 Seed: 2651785672'
output:
url: images/20240916_083814_2651785672_flux1-dev-fp8.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: mad-glwnggltch, glowing
license: unknown
---
# Glowing Glitch FLUX & SDXL
<Gallery />
## Model description
FROM https://civitai.com/models/306426/glowing-glitch-flux-andsdxl
Trigger mad-glwnggltch, glowing
strength 0.8-1.0
The LoRA was trained on Flux-Dev.
It might not work with other Flux versions. If it works, expect it to behave differently than with Flux-Dev.
The showcase images are made with Flux-Dev.
For Flux Dev I recommend the following settings: LoRA strength 0.8-1.0, highres fix with denoising 0.3-0.5.
If you enjoy my work, consider showing your support with a 👍 or ❤️ on the model or images—it really keeps me motivated!
You can also follow me or buy me a coffee ☕ at: https://ko-fi.com/madcaddie
Usage tips for the LoRA are in the version details
Thanks and have fun!
## Trigger words
You should use `mad-glwnggltch` to trigger the image generation.
You should use `glowing` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Jonjew/GlowingGlitchFlux/tree/main) them in the Files & versions tab.
|
yahyaabd/allstats-search-base-v1-64-1
|
yahyaabd
| 2025-03-01T05:46:57Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"xlm-roberta",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:25580",
"loss:OnlineContrastiveLoss",
"dataset:yahyaabd/query-hard-pos-neg-doc-pairs-statictable",
"arxiv:1908.10084",
"base_model:sentence-transformers/paraphrase-multilingual-mpnet-base-v2",
"base_model:finetune:sentence-transformers/paraphrase-multilingual-mpnet-base-v2",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-03-01T05:45:53Z |
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:25580
- loss:OnlineContrastiveLoss
base_model: sentence-transformers/paraphrase-multilingual-mpnet-base-v2
widget:
- source_sentence: ikhtisar arus kas triwulan 1, 2004 (miliar)
sentences:
- Balita (0-59 Bulan) Menurut Status Gizi, Tahun 1998-2005
- Perbandingan Indeks dan Tingkat Inflasi Desember 2023 Kota-kota di Luar Pulau
Jawa dan Sumatera dengan Nasional (2018=100)
- Rata-rata Konsumsi dan Pengeluaran Perkapita Seminggu Menurut Komoditi Makanan
dan Golongan Pengeluaran per Kapita Seminggu di Provinsi Sulawesi Tengah, 2018-2023
- source_sentence: BaIgaimana gambaran neraca arus dana dUi Indonesia pada kuartal
kedua tahun 2015?
sentences:
- Jumlah Sekolah, Guru, dan Murid Sekolah Menengah Pertama (SMP) di Bawah Kementrian
Pendidikan dan Kebudayaan Menurut Provinsi 2011/2012-2015/2016
- Ringkasan Neraca Arus Dana Triwulan III Tahun 2003 (Miliar Rupiah)
- Rata-rata Konsumsi dan Pengeluaran Perkapita Seminggu Menurut Komoditi Makanan
dan Golongan Pengeluaran per Kapita Seminggu di Provinsi Sulawesi Tenggara, 2018-2023
- source_sentence: Berapa persen pengeluaran orang di kotaa untuk makanan vs non-makanan,
per provinsi, 2018?
sentences:
- Ekspor Tanaman Obat, Aromatik, dan Rempah-Rempah menurut Negara Tujuan Utama,
2012-2023
- Rata-rata Pendapatan Bersih Pekerja Bebas Menurut Provinsi dan Pendidikan Tertinggi
yang Ditamatkan (ribu rupiah), 2017
- IHK dan Rata-rata Upah per Bulan Buruh Industri di Bawah Mandor (Supervisor),
1996-2014 (1996=100)
- source_sentence: Negara-negara asal impor crude oil dan produk turunannya tahun
2002-2023
sentences:
- Persentase Pengeluaran Rata-rata per Kapita Sebulan Menurut Kelompok Barang, Indonesia,
1999, 2002-2023
- Rata-rata Pendapatan Bersih Berusaha Sendiri menurut Provinsi dan Pendidikan yang
Ditamatkan (ribu rupiah), 2016
- Perkembangan Beberapa Agregat Pendapatan dan Pendapatan per Kapita Atas Dasar
Harga Berlaku, 2010-2016
- source_sentence: Arus dana Q3 2006
sentences:
- Posisi Simpanan Berjangka Rupiah pada Bank Umum dan BPR Menurut Golongan Pemilik
(miliar rupiah), 2005-2018
- Ringkasan Neraca Arus Dana, Triwulan III, 2006, (Miliar Rupiah)
- Rata-Rata Pengeluaran per Kapita Sebulan di Daerah Perkotaan Menurut Kelompok
Barang dan Golongan Pengeluaran per Kapita Sebulan, 2000-2012
datasets:
- yahyaabd/query-hard-pos-neg-doc-pairs-statictable
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy
- cosine_accuracy_threshold
- cosine_f1
- cosine_f1_threshold
- cosine_precision
- cosine_recall
- cosine_ap
- cosine_mcc
model-index:
- name: SentenceTransformer based on sentence-transformers/paraphrase-multilingual-mpnet-base-v2
results:
- task:
type: binary-classification
name: Binary Classification
dataset:
name: allstats semantic base v1 test
type: allstats-semantic-base-v1_test
metrics:
- type: cosine_accuracy
value: 0.9848926101201311
name: Cosine Accuracy
- type: cosine_accuracy_threshold
value: 0.7900121212005615
name: Cosine Accuracy Threshold
- type: cosine_f1
value: 0.9764805894020969
name: Cosine F1
- type: cosine_f1_threshold
value: 0.7900121212005615
name: Cosine F1 Threshold
- type: cosine_precision
value: 0.9907993099482462
name: Cosine Precision
- type: cosine_recall
value: 0.9625698324022346
name: Cosine Recall
- type: cosine_ap
value: 0.997296170532912
name: Cosine Ap
- type: cosine_mcc
value: 0.965575308214853
name: Cosine Mcc
- task:
type: binary-classification
name: Binary Classification
dataset:
name: allstats semantic base v1 dev
type: allstats-semantic-base-v1_dev
metrics:
- type: cosine_accuracy
value: 0.9830260996532214
name: Cosine Accuracy
- type: cosine_accuracy_threshold
value: 0.7720456123352051
name: Cosine Accuracy Threshold
- type: cosine_f1
value: 0.9737954353338968
name: Cosine F1
- type: cosine_f1_threshold
value: 0.7720456123352051
name: Cosine F1 Threshold
- type: cosine_precision
value: 0.9740698985343855
name: Cosine Precision
- type: cosine_recall
value: 0.9735211267605633
name: Cosine Recall
- type: cosine_ap
value: 0.9942901335165523
name: Cosine Ap
- type: cosine_mcc
value: 0.9612432190234385
name: Cosine Mcc
---
# SentenceTransformer based on sentence-transformers/paraphrase-multilingual-mpnet-base-v2
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/paraphrase-multilingual-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-mpnet-base-v2) on the [query-hard-pos-neg-doc-pairs-statictable](https://huggingface.co/datasets/yahyaabd/query-hard-pos-neg-doc-pairs-statictable) dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/paraphrase-multilingual-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-mpnet-base-v2) <!-- at revision 75c57757a97f90ad739aca51fa8bfea0e485a7f2 -->
- **Maximum Sequence Length:** 128 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- [query-hard-pos-neg-doc-pairs-statictable](https://huggingface.co/datasets/yahyaabd/query-hard-pos-neg-doc-pairs-statictable)
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("yahyaabd/allstats-search-base-v1-64-1")
# Run inference
sentences = [
'Arus dana Q3 2006',
'Ringkasan Neraca Arus Dana, Triwulan III, 2006, (Miliar Rupiah)',
'Rata-Rata Pengeluaran per Kapita Sebulan di Daerah Perkotaan Menurut Kelompok Barang dan Golongan Pengeluaran per Kapita Sebulan, 2000-2012',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Binary Classification
* Datasets: `allstats-semantic-base-v1_test` and `allstats-semantic-base-v1_dev`
* Evaluated with [<code>BinaryClassificationEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.BinaryClassificationEvaluator)
| Metric | allstats-semantic-base-v1_test | allstats-semantic-base-v1_dev |
|:--------------------------|:-------------------------------|:------------------------------|
| cosine_accuracy | 0.9849 | 0.983 |
| cosine_accuracy_threshold | 0.79 | 0.772 |
| cosine_f1 | 0.9765 | 0.9738 |
| cosine_f1_threshold | 0.79 | 0.772 |
| cosine_precision | 0.9908 | 0.9741 |
| cosine_recall | 0.9626 | 0.9735 |
| **cosine_ap** | **0.9973** | **0.9943** |
| cosine_mcc | 0.9656 | 0.9612 |
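The thresholds above can be applied directly when treating the model as a binary relevance classifier: a query–title pair counts as a match when its cosine similarity meets the tuned cutoff (about 0.79 on the test split). Below is a minimal sketch of that decision rule, using toy vectors in place of real `model.encode(...)` output:

```python
import numpy as np

# cosine_accuracy_threshold reported on the test split above
TEST_THRESHOLD = 0.79

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Plain cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_match(query_emb: np.ndarray, doc_emb: np.ndarray,
             threshold: float = TEST_THRESHOLD) -> bool:
    """Binary relevance decision used by the evaluator:
    predict label 1 iff similarity >= threshold."""
    return cosine_similarity(query_emb, doc_emb) >= threshold

# Toy vectors standing in for model.encode(...) output.
q = np.array([1.0, 0.0, 0.2])
near = np.array([0.9, 0.1, 0.25])   # high similarity -> match
far = np.array([-0.5, 1.0, 0.0])    # low similarity  -> no match

print(is_match(q, near), is_match(q, far))  # True False
```

In practice the same rule would be applied to embeddings produced by `model.encode`; the dev-split cutoff (about 0.77) is an equally reasonable operating point.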
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### query-hard-pos-neg-doc-pairs-statictable
* Dataset: [query-hard-pos-neg-doc-pairs-statictable](https://huggingface.co/datasets/yahyaabd/query-hard-pos-neg-doc-pairs-statictable) at [7b28b96](https://huggingface.co/datasets/yahyaabd/query-hard-pos-neg-doc-pairs-statictable/tree/7b28b964daa3073a4d012d1ffca46ecd4f26bb5f)
* Size: 25,580 training samples
* Columns: <code>query</code>, <code>doc</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | query | doc | label |
|:--------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:------------------------------------------------|
| type | string | string | int |
| details | <ul><li>min: 7 tokens</li><li>mean: 20.14 tokens</li><li>max: 55 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 24.9 tokens</li><li>max: 47 tokens</li></ul> | <ul><li>0: ~70.80%</li><li>1: ~29.20%</li></ul> |
* Samples:
| query | doc | label |
|:-------------------------------------------------------------------------|:----------------------------------------------|:---------------|
| <code>Status pekerjaan utama penduduk usia 15+ yang bekerja, 2020</code> | <code>Jumlah Penghuni Lapas per Kanwil</code> | <code>0</code> |
| <code>status pekerjaan utama penduduk usia 15+ yang bekerja, 2020</code> | <code>Jumlah Penghuni Lapas per Kanwil</code> | <code>0</code> |
| <code>STATUS PEKERJAAN UTAMA PENDUDUK USIA 15+ YANG BEKERJA, 2020</code> | <code>Jumlah Penghuni Lapas per Kanwil</code> | <code>0</code> |
* Loss: [<code>OnlineContrastiveLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#onlinecontrastiveloss)
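`OnlineContrastiveLoss` builds on the standard contrastive objective: positive pairs (label 1) are penalised by their squared embedding distance, while negative pairs (label 0) are penalised only while they sit closer than a margin; the "online" variant further restricts the loss to the hard pairs in each batch. A rough illustration of the per-pair term (a sketch, not the library's actual implementation; the margin value here is an assumption, not taken from this training run):

```python
def contrastive_pair_loss(dist: float, label: int, margin: float = 0.5) -> float:
    """Per-pair contrastive term: positives (label=1) are penalised by their
    squared distance; negatives (label=0) only while closer than the margin."""
    if label == 1:
        return dist ** 2
    return max(0.0, margin - dist) ** 2

# A positive pair that is still far apart contributes a large loss;
# a negative pair beyond the margin contributes nothing.
print(contrastive_pair_loss(0.8, 1))   # ~0.64
print(contrastive_pair_loss(0.8, 0))   # 0.0
print(contrastive_pair_loss(0.1, 0))   # ~0.16
```

Selecting only hard positives (large `dist`) and hard negatives (small `dist`) before summing is what distinguishes the online variant from plain contrastive loss.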
### Evaluation Dataset
#### query-hard-pos-neg-doc-pairs-statictable
* Dataset: [query-hard-pos-neg-doc-pairs-statictable](https://huggingface.co/datasets/yahyaabd/query-hard-pos-neg-doc-pairs-statictable) at [7b28b96](https://huggingface.co/datasets/yahyaabd/query-hard-pos-neg-doc-pairs-statictable/tree/7b28b964daa3073a4d012d1ffca46ecd4f26bb5f)
* Size: 5,479 evaluation samples
* Columns: <code>query</code>, <code>doc</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | query | doc | label |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:------------------------------------------------|
| type | string | string | int |
| details | <ul><li>min: 7 tokens</li><li>mean: 20.78 tokens</li><li>max: 52 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 26.28 tokens</li><li>max: 43 tokens</li></ul> | <ul><li>0: ~71.50%</li><li>1: ~28.50%</li></ul> |
* Samples:
| query | doc | label |
|:-----------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------|:---------------|
| <code>Bagaimana perbandingan PNS pria dan wanita di berbagai golongan tahun 2014?</code> | <code>Rata-rata Pendapatan Bersih Berusaha Sendiri Menurut Provinsi dan Lapangan Pekerjaan Utama (ribu rupiah), 2017</code> | <code>0</code> |
| <code>bagaimana perbandingan pns pria dan wanita di berbagai golongan tahun 2014?</code> | <code>Rata-rata Pendapatan Bersih Berusaha Sendiri Menurut Provinsi dan Lapangan Pekerjaan Utama (ribu rupiah), 2017</code> | <code>0</code> |
| <code>BAGAIMANA PERBANDINGAN PNS PRIA DAN WANITA DI BERBAGAI GOLONGAN TAHUN 2014?</code> | <code>Rata-rata Pendapatan Bersih Berusaha Sendiri Menurut Provinsi dan Lapangan Pekerjaan Utama (ribu rupiah), 2017</code> | <code>0</code> |
* Loss: [<code>OnlineContrastiveLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#onlinecontrastiveloss)
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
- `fp16`: True
- `dataloader_num_workers`: 4
- `load_best_model_at_end`: True
- `eval_on_start`: True
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 4
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: True
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss | allstats-semantic-base-v1_test_cosine_ap | allstats-semantic-base-v1_dev_cosine_ap |
|:-------:|:-------:|:-------------:|:---------------:|:----------------------------------------:|:---------------------------------------:|
| -1 | -1 | - | - | 0.9365 | - |
| 0 | 0 | - | 1.3012 | - | 0.9331 |
| 0.05 | 20 | 0.8793 | 0.3369 | - | 0.9868 |
| 0.1 | 40 | 0.3919 | 0.4554 | - | 0.9799 |
| 0.15 | 60 | 0.2398 | 0.2568 | - | 0.9897 |
| 0.2 | 80 | 0.2672 | 0.2341 | - | 0.9917 |
| 0.25 | 100 | 0.1842 | 0.2385 | - | 0.9855 |
| 0.3 | 120 | 0.0857 | 0.2157 | - | 0.9927 |
| 0.35 | 140 | 0.1376 | 0.1655 | - | 0.9932 |
| 0.4 | 160 | 0.0904 | 0.2740 | - | 0.9890 |
| 0.45 | 180 | 0.1708 | 0.3111 | - | 0.9840 |
| 0.5 | 200 | 0.1761 | 0.1739 | - | 0.9939 |
| 0.55 | 220 | 0.0817 | 0.2213 | - | 0.9906 |
| 0.6 | 240 | 0.0567 | 0.1985 | - | 0.9901 |
| 0.65 | 260 | 0.0796 | 0.1560 | - | 0.9907 |
| 0.7 | 280 | 0.0637 | 0.1648 | - | 0.9911 |
| 0.75 | 300 | 0.0206 | 0.1301 | - | 0.9939 |
| 0.8 | 320 | 0.0344 | 0.1378 | - | 0.9939 |
| 0.85 | 340 | 0.0565 | 0.1333 | - | 0.9941 |
| 0.9 | 360 | 0.0064 | 0.1308 | - | 0.9942 |
| 0.95 | 380 | 0.0327 | 0.1316 | - | 0.9943 |
| **1.0** | **400** | **0.0138** | **0.1266** | **-** | **0.9943** |
| -1 | -1 | - | - | 0.9973 | - |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.4.0
- Transformers: 4.48.1
- PyTorch: 2.5.1+cu124
- Accelerate: 1.3.0
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
Jonjew/PatrickNagelStyle
|
Jonjew
| 2025-03-01T05:42:15Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:unknown",
"region:us"
] |
text-to-image
| 2025-03-01T05:41:16Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: "mad-nglstl <lora:patrick-nagel-style-flux:1> Poster, illustration, Flat Colour A monochrome image of a raven-haired woman with a long, flowing mane. She's posed in profile, her gaze directed upwards. The background is a stark white, creating a strong contrast between the black of her hair and the pale tones of her face and neck. This image evokes a sense of purity and timelessness."
parameters:
negative_prompt: 'Guidance: 1 Steps: 12 Seed: 3583696309'
output:
url: >-
images/00364-flux1DevHyperNF4Flux1DevBNB_flux1DevHyperNF4_3583696309_Euler_1_12_1344x1728.png
- text: "mad-nglstl <lora:patrick-nagel-style-flux:1> Poster, Illustration, Minimalist Design, Flat Colour A brunette woman in a fitted black dress sits on a small yacht, her posture relaxed as she reclines on the plush seating area. Her gaze is directed towards the calm ocean water, and she holds a glass of champagne delicately in one hand. The yacht's interior is rendered in soft, muted tones, and the flat colours of the sea and sky create a tranquil, luxurious atmosphere. The scene evokes a sense of leisurely elegance and refined taste."
parameters:
negative_prompt: 'Guidance: 1 Steps: 12 Seed: 3139834208'
output:
url: >-
images/00019-flux1DevHyperNF4Flux1DevBNB_flux1DevHyperNF4_3139834208_Euler_1_12_2048x1152.png
- text: "mad-nglstl <lora:patrick-nagel-style-flux:1> Poster, illustration, Flat Colour, Stylized Graphic A close-up headshot of a red-haired woman with a voluminous hairstyle. She's gazing over the top of oversized white sunglasses, her lips painted in a deep wine color. Her shoulders are covered in a high-collared black coat that contrasts sharply against the flat, pastel blue background. Beneath the portrait, \"Patrick Nagel Style\" is written in a monospaced retro font, echoing the classic 80s design aesthetic."
parameters:
negative_prompt: 'Guidance: 1 Steps: 12 Seed: 205566818'
output:
url: >-
images/00027-flux1DevHyperNF4Flux1DevBNB_flux1DevHyperNF4_205566818_Euler_1_12_1344x1728.png
- text: "mad-nglstl <lora:patrick-nagel-style-flux:1> Poster, illustration, Flat Colour A portrait of a blonde woman with her hair pulled back in a high ponytail, wearing oversized black sunglasses. She's dressed in a strapless, white silk top. The background is a stark black, creating a dramatic contrast. Her lips are a deep, bold red, the only vibrant color in the composition. This shot is meant to evoke a sense of cool sophistication and elegance."
parameters:
negative_prompt: 'Guidance: 1 Steps: 12 Seed: 3849281708'
output:
url: >-
images/00336-flux1DevHyperNF4Flux1DevBNB_flux1DevHyperNF4_3849281708_Euler_1_12_1344x1728.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: mad-nglstl, illustration
license: unknown
---
# Patrick Nagel Style FLUX
<Gallery />
## Model description
FROM https://civitai.com/models/804710/patrick-nagel-style-flux
Triggers: mad-nglstl, illustration
Strength 0.8-1.0
The LoRA was trained on Flux-Dev.
It might not work with other Flux versions; if it does work, expect it to behave differently than with Flux-Dev.
The showcase images are made with Flux-Dev.
For Flux-Dev I recommend the following settings: LoRA strength 0.8-1.0, highres fix with denoising 0.25-0.40.
Thanks to @Mirabilis for the training data and the showcase images. Please check out his profile; he makes amazing images.
If you enjoy my work, consider showing your support with a 👍 or ❤️ on the model or images—it really keeps me motivated!
You can also follow me or buy me a coffee ☕ at: https://ko-fi.com/madcaddie
Usage tips for the LoRA are in the version details
Thanks and have fun!
## Trigger words
You should use `mad-nglstl` and `illustration` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Jonjew/PatrickNagelStyle/tree/main) them in the Files & versions tab.
|
kk-aivio/2f083dbb-cbed-4ff0-a6c9-2f112373f26b
|
kk-aivio
| 2025-03-01T05:41:44Z | 0 | 0 |
peft
|
[
"peft",
"generated_from_trainer",
"base_model:elyza/Llama-3-ELYZA-JP-8B",
"base_model:adapter:elyza/Llama-3-ELYZA-JP-8B",
"region:us"
] | null | 2025-03-01T05:41:32Z |
---
library_name: peft
tags:
- generated_from_trainer
base_model: elyza/Llama-3-ELYZA-JP-8B
model-index:
- name: kk-aivio/2f083dbb-cbed-4ff0-a6c9-2f112373f26b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kk-aivio/2f083dbb-cbed-4ff0-a6c9-2f112373f26b
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4550
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
mradermacher/healthinsurance_textgen1-i1-GGUF
|
mradermacher
| 2025-03-01T05:39:47Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"generated_from_trainer",
"en",
"base_model:vraman54/healthinsurance_textgen1",
"base_model:quantized:vraman54/healthinsurance_textgen1",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2025-03-01T05:37:22Z |
---
base_model: vraman54/healthinsurance_textgen1
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- generated_from_trainer
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/vraman54/healthinsurance_textgen1
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/healthinsurance_textgen1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/healthinsurance_textgen1-i1-GGUF/resolve/main/healthinsurance_textgen1.i1-IQ1_S.gguf) | i1-IQ1_S | 0.2 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/healthinsurance_textgen1-i1-GGUF/resolve/main/healthinsurance_textgen1.i1-IQ1_M.gguf) | i1-IQ1_M | 0.2 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/healthinsurance_textgen1-i1-GGUF/resolve/main/healthinsurance_textgen1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/healthinsurance_textgen1-i1-GGUF/resolve/main/healthinsurance_textgen1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/healthinsurance_textgen1-i1-GGUF/resolve/main/healthinsurance_textgen1.i1-IQ2_S.gguf) | i1-IQ2_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/healthinsurance_textgen1-i1-GGUF/resolve/main/healthinsurance_textgen1.i1-IQ2_M.gguf) | i1-IQ2_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/healthinsurance_textgen1-i1-GGUF/resolve/main/healthinsurance_textgen1.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.2 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/healthinsurance_textgen1-i1-GGUF/resolve/main/healthinsurance_textgen1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/healthinsurance_textgen1-i1-GGUF/resolve/main/healthinsurance_textgen1.i1-Q2_K.gguf) | i1-Q2_K | 0.2 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/healthinsurance_textgen1-i1-GGUF/resolve/main/healthinsurance_textgen1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/healthinsurance_textgen1-i1-GGUF/resolve/main/healthinsurance_textgen1.i1-IQ3_S.gguf) | i1-IQ3_S | 0.2 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/healthinsurance_textgen1-i1-GGUF/resolve/main/healthinsurance_textgen1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 0.2 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/healthinsurance_textgen1-i1-GGUF/resolve/main/healthinsurance_textgen1.i1-IQ3_M.gguf) | i1-IQ3_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/healthinsurance_textgen1-i1-GGUF/resolve/main/healthinsurance_textgen1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 0.2 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/healthinsurance_textgen1-i1-GGUF/resolve/main/healthinsurance_textgen1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 0.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/healthinsurance_textgen1-i1-GGUF/resolve/main/healthinsurance_textgen1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/healthinsurance_textgen1-i1-GGUF/resolve/main/healthinsurance_textgen1.i1-IQ4_NL.gguf) | i1-IQ4_NL | 0.2 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/healthinsurance_textgen1-i1-GGUF/resolve/main/healthinsurance_textgen1.i1-Q4_0.gguf) | i1-Q4_0 | 0.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/healthinsurance_textgen1-i1-GGUF/resolve/main/healthinsurance_textgen1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 0.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/healthinsurance_textgen1-i1-GGUF/resolve/main/healthinsurance_textgen1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/healthinsurance_textgen1-i1-GGUF/resolve/main/healthinsurance_textgen1.i1-Q4_1.gguf) | i1-Q4_1 | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/healthinsurance_textgen1-i1-GGUF/resolve/main/healthinsurance_textgen1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/healthinsurance_textgen1-i1-GGUF/resolve/main/healthinsurance_textgen1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/healthinsurance_textgen1-i1-GGUF/resolve/main/healthinsurance_textgen1.i1-Q6_K.gguf) | i1-Q6_K | 0.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/gpt2-mental-health-i1-GGUF
|
mradermacher
| 2025-03-01T05:39:15Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:Jyz1331/gpt2-mental-health",
"base_model:quantized:Jyz1331/gpt2-mental-health",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2025-03-01T05:34:25Z |
---
base_model: Jyz1331/gpt2-mental-health
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Jyz1331/gpt2-mental-health
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/gpt2-mental-health-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/gpt2-mental-health-i1-GGUF/resolve/main/gpt2-mental-health.i1-IQ1_S.gguf) | i1-IQ1_S | 0.2 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/gpt2-mental-health-i1-GGUF/resolve/main/gpt2-mental-health.i1-IQ1_M.gguf) | i1-IQ1_M | 0.2 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/gpt2-mental-health-i1-GGUF/resolve/main/gpt2-mental-health.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gpt2-mental-health-i1-GGUF/resolve/main/gpt2-mental-health.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gpt2-mental-health-i1-GGUF/resolve/main/gpt2-mental-health.i1-IQ2_S.gguf) | i1-IQ2_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gpt2-mental-health-i1-GGUF/resolve/main/gpt2-mental-health.i1-IQ2_M.gguf) | i1-IQ2_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gpt2-mental-health-i1-GGUF/resolve/main/gpt2-mental-health.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.2 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/gpt2-mental-health-i1-GGUF/resolve/main/gpt2-mental-health.i1-Q2_K.gguf) | i1-Q2_K | 0.2 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/gpt2-mental-health-i1-GGUF/resolve/main/gpt2-mental-health.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/gpt2-mental-health-i1-GGUF/resolve/main/gpt2-mental-health.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gpt2-mental-health-i1-GGUF/resolve/main/gpt2-mental-health.i1-IQ3_S.gguf) | i1-IQ3_S | 0.2 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/gpt2-mental-health-i1-GGUF/resolve/main/gpt2-mental-health.i1-Q3_K_S.gguf) | i1-Q3_K_S | 0.2 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/gpt2-mental-health-i1-GGUF/resolve/main/gpt2-mental-health.i1-IQ3_M.gguf) | i1-IQ3_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gpt2-mental-health-i1-GGUF/resolve/main/gpt2-mental-health.i1-Q3_K_M.gguf) | i1-Q3_K_M | 0.2 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/gpt2-mental-health-i1-GGUF/resolve/main/gpt2-mental-health.i1-Q3_K_L.gguf) | i1-Q3_K_L | 0.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/gpt2-mental-health-i1-GGUF/resolve/main/gpt2-mental-health.i1-IQ4_XS.gguf) | i1-IQ4_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gpt2-mental-health-i1-GGUF/resolve/main/gpt2-mental-health.i1-IQ4_NL.gguf) | i1-IQ4_NL | 0.2 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/gpt2-mental-health-i1-GGUF/resolve/main/gpt2-mental-health.i1-Q4_0.gguf) | i1-Q4_0 | 0.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/gpt2-mental-health-i1-GGUF/resolve/main/gpt2-mental-health.i1-Q4_K_S.gguf) | i1-Q4_K_S | 0.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/gpt2-mental-health-i1-GGUF/resolve/main/gpt2-mental-health.i1-Q4_K_M.gguf) | i1-Q4_K_M | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/gpt2-mental-health-i1-GGUF/resolve/main/gpt2-mental-health.i1-Q4_1.gguf) | i1-Q4_1 | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gpt2-mental-health-i1-GGUF/resolve/main/gpt2-mental-health.i1-Q5_K_S.gguf) | i1-Q5_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gpt2-mental-health-i1-GGUF/resolve/main/gpt2-mental-health.i1-Q5_K_M.gguf) | i1-Q5_K_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gpt2-mental-health-i1-GGUF/resolve/main/gpt2-mental-health.i1-Q6_K.gguf) | i1-Q6_K | 0.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
imdatta0/llama_openthoughts_sorted
|
imdatta0
| 2025-03-01T05:38:51Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-01T05:36:02Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
necva/replica-IEPile
|
necva
| 2025-03-01T05:37:52Z | 0 | 0 | null |
[
"safetensors",
"llama",
"en",
"dataset:zjunlp/iepile",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:quantized:meta-llama/Llama-3.1-8B-Instruct",
"license:mit",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-03-01T03:53:54Z |
---
license: mit
datasets:
- zjunlp/iepile
language:
- en
base_model:
- meta-llama/Llama-3.1-8B-Instruct
---
## Intended use
The model is instruction-tuned on the IEPile dataset. It is intended for Information Extraction tasks: Named Entity Recognition (NER), Relation Extraction (RE), and Event Extraction (EE).
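IEPile-style fine-tunes expect the task to be serialized as a JSON instruction. A minimal sketch of building such a prompt for NER — the field names (`instruction`, `schema`, `input`) follow the published IEPile examples and are an assumption for this particular fine-tune:

```python
import json

def build_ner_prompt(text: str, entity_types: list[str]) -> str:
    # Hypothetical helper: serialize an NER task in the IEPile instruction
    # format. Field names may differ for this specific checkpoint.
    task = {
        "instruction": (
            "You are an expert in named entity recognition. "
            "Extract entities of the given types from the input text."
        ),
        "schema": entity_types,
        "input": text,
    }
    return json.dumps(task, ensure_ascii=False)

prompt = build_ner_prompt("Barack Obama visited Paris.", ["person", "location"])
print(prompt)
```

The resulting string is what you would pass to the model's chat template as the user turn.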
|
irishprancer/354565ae-3eb3-43ed-896b-82627f516a80
|
irishprancer
| 2025-03-01T05:36:09Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-02-28T23:51:48Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
irishprancer/68666e6f-b562-4b7f-bce4-811056edb2cf
|
irishprancer
| 2025-03-01T05:36:06Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-02-28T23:52:36Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
MaziyarPanahi/Fireball-R1.1-Llama-3.1-8B-GGUF
|
MaziyarPanahi
| 2025-03-01T05:34:37Z | 0 | 0 | null |
[
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"text-generation",
"base_model:EpistemeAI/Fireball-R1.1-Llama-3.1-8B",
"base_model:quantized:EpistemeAI/Fireball-R1.1-Llama-3.1-8B",
"region:us",
"conversational"
] |
text-generation
| 2025-03-01T05:12:41Z |
---
base_model: EpistemeAI/Fireball-R1.1-Llama-3.1-8B
inference: false
model_creator: EpistemeAI
model_name: Fireball-R1.1-Llama-3.1-8B-GGUF
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- text-generation
---
# [MaziyarPanahi/Fireball-R1.1-Llama-3.1-8B-GGUF](https://huggingface.co/MaziyarPanahi/Fireball-R1.1-Llama-3.1-8B-GGUF)
- Model creator: [EpistemeAI](https://huggingface.co/EpistemeAI)
- Original model: [EpistemeAI/Fireball-R1.1-Llama-3.1-8B](https://huggingface.co/EpistemeAI/Fireball-R1.1-Llama-3.1-8B)
## Description
[MaziyarPanahi/Fireball-R1.1-Llama-3.1-8B-GGUF](https://huggingface.co/MaziyarPanahi/Fireball-R1.1-Llama-3.1-8B-GGUF) contains GGUF format model files for [EpistemeAI/Fireball-R1.1-Llama-3.1-8B](https://huggingface.co/EpistemeAI/Fireball-R1.1-Llama-3.1-8B).
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [GPT4All](https://gpt4all.io/index.html), a free and open-source, locally running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
|
mradermacher/Mental_Health_Fine_Tuned_GPT2-GGUF
|
mradermacher
| 2025-03-01T05:33:28Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:singhvarun789/Mental_Health_Fine_Tuned_GPT2",
"base_model:quantized:singhvarun789/Mental_Health_Fine_Tuned_GPT2",
"endpoints_compatible",
"region:us"
] | null | 2025-03-01T02:42:30Z |
---
base_model: singhvarun789/Mental_Health_Fine_Tuned_GPT2
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/singhvarun789/Mental_Health_Fine_Tuned_GPT2
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Mental_Health_Fine_Tuned_GPT2-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
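The READMEs linked above describe reassembling multi-part GGUF downloads by concatenating the parts in order with `cat`. A self-contained illustration, with dummy data standing in for real part files (real multi-part quants use names like `model.Q6_K.gguf.part1of2`, which are hypothetical here):

```shell
# Create dummy "parts" so the example runs anywhere; with a real download
# you would already have the .part1of2 / .part2of2 files on disk.
printf 'GGUF-part-1:' > model.gguf.part1of2
printf 'part-2'       > model.gguf.part2of2

# Reassemble: concatenate the parts in order into a single file.
cat model.gguf.part1of2 model.gguf.part2of2 > model.gguf
cat model.gguf
```

The order of arguments to `cat` matters; parts must be concatenated in their numbered sequence.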
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Mental_Health_Fine_Tuned_GPT2-GGUF/resolve/main/Mental_Health_Fine_Tuned_GPT2.Q2_K.gguf) | Q2_K | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Mental_Health_Fine_Tuned_GPT2-GGUF/resolve/main/Mental_Health_Fine_Tuned_GPT2.Q3_K_S.gguf) | Q3_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Mental_Health_Fine_Tuned_GPT2-GGUF/resolve/main/Mental_Health_Fine_Tuned_GPT2.Q3_K_M.gguf) | Q3_K_M | 0.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Mental_Health_Fine_Tuned_GPT2-GGUF/resolve/main/Mental_Health_Fine_Tuned_GPT2.Q3_K_L.gguf) | Q3_K_L | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Mental_Health_Fine_Tuned_GPT2-GGUF/resolve/main/Mental_Health_Fine_Tuned_GPT2.IQ4_XS.gguf) | IQ4_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Mental_Health_Fine_Tuned_GPT2-GGUF/resolve/main/Mental_Health_Fine_Tuned_GPT2.Q4_K_S.gguf) | Q4_K_S | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mental_Health_Fine_Tuned_GPT2-GGUF/resolve/main/Mental_Health_Fine_Tuned_GPT2.Q4_K_M.gguf) | Q4_K_M | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mental_Health_Fine_Tuned_GPT2-GGUF/resolve/main/Mental_Health_Fine_Tuned_GPT2.Q5_K_S.gguf) | Q5_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Mental_Health_Fine_Tuned_GPT2-GGUF/resolve/main/Mental_Health_Fine_Tuned_GPT2.Q5_K_M.gguf) | Q5_K_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Mental_Health_Fine_Tuned_GPT2-GGUF/resolve/main/Mental_Health_Fine_Tuned_GPT2.Q6_K.gguf) | Q6_K | 0.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Mental_Health_Fine_Tuned_GPT2-GGUF/resolve/main/Mental_Health_Fine_Tuned_GPT2.Q8_0.gguf) | Q8_0 | 0.3 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Mental_Health_Fine_Tuned_GPT2-GGUF/resolve/main/Mental_Health_Fine_Tuned_GPT2.f16.gguf) | f16 | 0.4 | 16 bpw, overkill |
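The sizes in the table can be sanity-checked with a rough rule of thumb: on-disk size is approximately parameter count times bits per weight, divided by eight. This is only a lower-bound sketch — it ignores GGUF metadata and tensors kept at higher precision (embeddings, norms), which is why the listed f16 figure (~0.4 GB) exceeds the weights-only estimate for a ~124M-parameter GPT-2:

```python
def approx_gguf_size_gb(n_params: float, bits_per_weight: float) -> float:
    # Weights-only lower bound: params * bpw bits, converted to gigabytes.
    return n_params * bits_per_weight / 8 / 1e9

# GPT-2 small has roughly 124M parameters; f16 stores 16 bits per weight.
print(round(approx_gguf_size_gb(124e6, 16.0), 2))  # → 0.25
```

Quant types like Q4_K_M average between 4 and 5 bits per weight, which is consistent with the ~0.2 GB entries above.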
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Grogros/dmWM-llama-3.2-1B-Instruct-OWTWM-DistillationWM-OWTWM2-wmToken-d4-10percent
|
Grogros
| 2025-03-01T05:33:25Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"conversational",
"dataset:openwebtext",
"base_model:meta-llama/Llama-3.2-1B-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-1B-Instruct",
"license:llama3.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-01T02:39:20Z |
---
library_name: transformers
license: llama3.2
base_model: meta-llama/Llama-3.2-1B-Instruct
tags:
- generated_from_trainer
datasets:
- openwebtext
model-index:
- name: dmWM-llama-3.2-1B-Instruct-OWTWM-DistillationWM-OWTWM2-wmToken-d4-10percent
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dmWM-llama-3.2-1B-Instruct-OWTWM-DistillationWM-OWTWM2-wmToken-d4-10percent
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) on the openwebtext dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adafactor (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 2500
### Training results
### Framework versions
- Transformers 4.46.3
- Pytorch 2.5.1.post303
- Datasets 3.2.0
- Tokenizers 0.20.4
|
bowilleatyou/4eac65d4-7da9-45aa-bea7-d941a5d65086
|
bowilleatyou
| 2025-03-01T05:33:21Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-02-28T23:52:31Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/GPT-2-fine-tuned-mental-health-i1-GGUF
|
mradermacher
| 2025-03-01T05:32:49Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation",
"mental-health",
"gpt-2",
"conversational-ai",
"en",
"dataset:custom-dataset",
"dataset:kaggle",
"base_model:TheCarBun/GPT-2-fine-tuned-mental-health",
"base_model:quantized:TheCarBun/GPT-2-fine-tuned-mental-health",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix"
] |
text-generation
| 2025-03-01T05:27:51Z |
---
base_model: TheCarBun/GPT-2-fine-tuned-mental-health
datasets:
- custom-dataset
- kaggle
language: en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation
- transformers
- mental-health
- gpt-2
- conversational-ai
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/TheCarBun/GPT-2-fine-tuned-mental-health
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/GPT-2-fine-tuned-mental-health-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/GPT-2-fine-tuned-mental-health-i1-GGUF/resolve/main/GPT-2-fine-tuned-mental-health.i1-IQ1_S.gguf) | i1-IQ1_S | 0.2 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/GPT-2-fine-tuned-mental-health-i1-GGUF/resolve/main/GPT-2-fine-tuned-mental-health.i1-IQ1_M.gguf) | i1-IQ1_M | 0.2 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/GPT-2-fine-tuned-mental-health-i1-GGUF/resolve/main/GPT-2-fine-tuned-mental-health.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/GPT-2-fine-tuned-mental-health-i1-GGUF/resolve/main/GPT-2-fine-tuned-mental-health.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/GPT-2-fine-tuned-mental-health-i1-GGUF/resolve/main/GPT-2-fine-tuned-mental-health.i1-IQ2_S.gguf) | i1-IQ2_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/GPT-2-fine-tuned-mental-health-i1-GGUF/resolve/main/GPT-2-fine-tuned-mental-health.i1-IQ2_M.gguf) | i1-IQ2_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/GPT-2-fine-tuned-mental-health-i1-GGUF/resolve/main/GPT-2-fine-tuned-mental-health.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.2 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/GPT-2-fine-tuned-mental-health-i1-GGUF/resolve/main/GPT-2-fine-tuned-mental-health.i1-Q2_K.gguf) | i1-Q2_K | 0.2 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/GPT-2-fine-tuned-mental-health-i1-GGUF/resolve/main/GPT-2-fine-tuned-mental-health.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/GPT-2-fine-tuned-mental-health-i1-GGUF/resolve/main/GPT-2-fine-tuned-mental-health.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/GPT-2-fine-tuned-mental-health-i1-GGUF/resolve/main/GPT-2-fine-tuned-mental-health.i1-IQ3_S.gguf) | i1-IQ3_S | 0.2 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/GPT-2-fine-tuned-mental-health-i1-GGUF/resolve/main/GPT-2-fine-tuned-mental-health.i1-Q3_K_S.gguf) | i1-Q3_K_S | 0.2 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/GPT-2-fine-tuned-mental-health-i1-GGUF/resolve/main/GPT-2-fine-tuned-mental-health.i1-IQ3_M.gguf) | i1-IQ3_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/GPT-2-fine-tuned-mental-health-i1-GGUF/resolve/main/GPT-2-fine-tuned-mental-health.i1-Q3_K_M.gguf) | i1-Q3_K_M | 0.2 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/GPT-2-fine-tuned-mental-health-i1-GGUF/resolve/main/GPT-2-fine-tuned-mental-health.i1-Q3_K_L.gguf) | i1-Q3_K_L | 0.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/GPT-2-fine-tuned-mental-health-i1-GGUF/resolve/main/GPT-2-fine-tuned-mental-health.i1-IQ4_XS.gguf) | i1-IQ4_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/GPT-2-fine-tuned-mental-health-i1-GGUF/resolve/main/GPT-2-fine-tuned-mental-health.i1-IQ4_NL.gguf) | i1-IQ4_NL | 0.2 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/GPT-2-fine-tuned-mental-health-i1-GGUF/resolve/main/GPT-2-fine-tuned-mental-health.i1-Q4_0.gguf) | i1-Q4_0 | 0.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/GPT-2-fine-tuned-mental-health-i1-GGUF/resolve/main/GPT-2-fine-tuned-mental-health.i1-Q4_K_S.gguf) | i1-Q4_K_S | 0.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/GPT-2-fine-tuned-mental-health-i1-GGUF/resolve/main/GPT-2-fine-tuned-mental-health.i1-Q4_K_M.gguf) | i1-Q4_K_M | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/GPT-2-fine-tuned-mental-health-i1-GGUF/resolve/main/GPT-2-fine-tuned-mental-health.i1-Q4_1.gguf) | i1-Q4_1 | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/GPT-2-fine-tuned-mental-health-i1-GGUF/resolve/main/GPT-2-fine-tuned-mental-health.i1-Q5_K_S.gguf) | i1-Q5_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/GPT-2-fine-tuned-mental-health-i1-GGUF/resolve/main/GPT-2-fine-tuned-mental-health.i1-Q5_K_M.gguf) | i1-Q5_K_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/GPT-2-fine-tuned-mental-health-i1-GGUF/resolve/main/GPT-2-fine-tuned-mental-health.i1-Q6_K.gguf) | i1-Q6_K | 0.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
robiulawaldev/14f0a47f-740c-4527-9cc9-ad607a9940e8
|
robiulawaldev
| 2025-03-01T05:32:21Z | 0 | 0 |
peft
|
[
"peft",
"generated_from_trainer",
"base_model:elyza/Llama-3-ELYZA-JP-8B",
"base_model:adapter:elyza/Llama-3-ELYZA-JP-8B",
"region:us"
] | null | 2025-03-01T05:32:06Z |
---
library_name: peft
tags:
- generated_from_trainer
base_model: elyza/Llama-3-ELYZA-JP-8B
model-index:
- name: robiulawaldev/14f0a47f-740c-4527-9cc9-ad607a9940e8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robiulawaldev/14f0a47f-740c-4527-9cc9-ad607a9940e8
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7644
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
Jonjew/ModernMinimalismFlux
|
Jonjew
| 2025-03-01T05:30:43Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:unknown",
"region:us"
] |
text-to-image
| 2025-03-01T05:29:26Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
mad-mdrnmnmlsm painting of a cyberpunk woman wearing a futuristic kimono in
front of stylized sun, cybernetic implants, paint splashes, outrun, teal and
yellow background <lora:modern-minimalism-flux:1.0> brush_stroke
parameters:
negative_prompt: 'Guidance: 1 Steps: 20 Seed: 3596382678'
output:
url: images/20241129_182300_3596382678_flux1-dev-fp8-e4m3fn.png
- text: >-
mad-mdrnmnmlsm painting of futuristic clothing, woman sitting on the roof in
a cyberpunk city overlooking a busy<lora:modern-minimalism-flux:1.2>
brush_stroke, red,
parameters:
negative_prompt: 'Guidance: 1 Steps: 20 Seed: 2241650794'
output:
url: images/20241129_183234_2241650794_flux1-dev-fp8-e4m3fn.png
- text: >-
mad-mdrnmnmlsm painting of woman wearing a futuristic dress, smiling, upper
body, text banner reading "modern minimalism"
<lora:modern-minimalism-flux:1.2> brush_stroke, orange
parameters:
negative_prompt: 'Guidance: 1 Steps: 20 Seed: 774678587'
output:
url: images/20241129_193512_774678587_flux1-dev-fp8-e4m3fn.png
- text: >-
black and white and red mad-mdrnmnmlsm painting of a woman
<lora:modern-minimalism-flux:1.2> brush_stroke
parameters:
negative_prompt: 'Guidance: 1 Steps: 20 Seed: 1430858349'
output:
url: images/20241129_181156_1430858349_flux1-dev-fp8-e4m3fn.png
- text: >-
mad-mdrnmnmlsm painting of futuristic clothing, woman sitting on the roof in
a cyberpunk city overlooking a busy<lora:modern-minimalism-flux:1.2>
brush_stroke, red, neon yellow, navy blue, green
parameters:
negative_prompt: 'Guidance: 1 Steps: 20 Seed: 2688234518'
output:
url: images/20241129_184427_2688234518_flux1-dev-fp8-e4m3fn.png
- text: >-
black and white and green mad-mdrnmnmlsm painting of a
<lora:modern-minimalism-flux:1.2> brush_stroke
parameters:
negative_prompt: 'Guidance: 1 Steps: 20 Seed: 1470341916'
output:
url: images/20241129_181557_1470341916_flux1-dev-fp8-e4m3fn.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: mad-mdrnmnmlsm, painting of, paint splashes, outrun, brush strokes
license: unknown
---
# Modern Minimalism FLUX
<Gallery />
## Model description
FROM https://civitai.com/models/992509/modern-minimalism-flux
Triggers: mad-mdrnmnmlsm, painting of, paint splashes, outrun, brush strokes
Strength 1.2
About this version
The LoRA was trained on Flux-Dev.
It might not work with other Flux versions. If it does work, expect it to behave differently than with Flux-Dev.
The showcase images are made with Flux-Dev.
For Flux-Dev I recommend the following settings: LoRA strength 1.0-1.4, highres fix with denoising 0.4-0.5
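To make the tips above concrete, here is a purely illustrative sketch (the variable names are mine, not part of any tool's API) of how the showcase prompts combine the trigger token, the subject, and the LoRA strength syntax:

```shell
# Illustrative prompt assembly following the showcase images above;
# the variable names are placeholders, not part of any official API.
subject="a cyberpunk woman in a futuristic kimono"
strength="1.2"   # recommended range for Flux-Dev: 1.0-1.4
prompt="mad-mdrnmnmlsm painting of ${subject} <lora:modern-minimalism-flux:${strength}> brush_stroke"
echo "$prompt"
```

The resulting string can be pasted directly into a prompt field; adjust `strength` within the recommended range to taste.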
## Trigger words
You should use `mad-mdrnmnmlsm` to trigger the image generation.
You should use `painting of` to trigger the image generation.
You should use `paint splashes` to trigger the image generation.
You should use `outrun` to trigger the image generation.
You should use `brush strokes` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Jonjew/ModernMinimalismFlux/tree/main) them in the Files & versions tab.
|
quyeticb/nhqcv
|
quyeticb
| 2025-03-01T05:26:39Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-03-01T05:24:37Z |
---
license: apache-2.0
---
|
baby-dev/8d266fdf-99c9-4525-ae46-bdef53461ec8
|
baby-dev
| 2025-03-01T05:25:37Z | 0 | 0 |
peft
|
[
"peft",
"generated_from_trainer",
"base_model:NousResearch/Nous-Hermes-2-Mistral-7B-DPO",
"base_model:adapter:NousResearch/Nous-Hermes-2-Mistral-7B-DPO",
"region:us"
] | null | 2025-03-01T05:25:23Z |
---
library_name: peft
tags:
- generated_from_trainer
base_model: NousResearch/Nous-Hermes-2-Mistral-7B-DPO
model-index:
- name: baby-dev/8d266fdf-99c9-4525-ae46-bdef53461ec8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# baby-dev/8d266fdf-99c9-4525-ae46-bdef53461ec8
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3350
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
Jonjew/StencilArtFlux
|
Jonjew
| 2025-03-01T05:24:46Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:unknown",
"region:us"
] |
text-to-image
| 2025-03-01T05:24:08Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
cyberpunk woman cybernetic implants , flat colors, mad-stncl
<lora:Stencil_Art_FLUX:0.7>, (masterpiece:1.2), best quality
parameters:
negative_prompt: 'Guidance: 1 Steps: 20 Seed: 2009145261'
output:
url: images/20240818_081643_2009145261_flux1-dev.png
- text: >-
cyberpunk woman cybernetic implants, text "stencil art" , flat colors,
mad-stncl <lora:Stencil_Art_FLUX:0.7>, (masterpiece:1.2), best quality
parameters:
negative_prompt: 'Guidance: 1 Steps: 20 Seed: 3583174960'
output:
url: images/20240818_082951_3583174960_flux1-dev.png
- text: >-
(vertical text "STENCIL ART by madcaddie":1.2), woman standing in a
futuristic cityscape, colored panels, flat colors, mad-stncl
<lora:Stencil_Art_FLUX:0.5>, (masterpiece:1.2), best quality
parameters:
negative_prompt: 'Guidance: 1 Steps: 12 Seed: 3039115122'
output:
url: images/20240818_085833_3039115122_flux1-dev.png
- text: >-
cyberpunk woman cybernetic implants, flat colors, mad-stncl
<lora:Stencil_Art_FLUX:0.7>, (masterpiece:1.2), best quality
parameters:
negative_prompt: 'Guidance: 1 Steps: 20 Seed: 162613546'
output:
url: images/20240818_081123_162613546_flux1-dev.png
- text: >-
cyberpunk woman cybernetic implants , flat colors, mad-stncl
<lora:Stencil_Art_FLUX:0.7>, (masterpiece:1.2), best quality
parameters:
negative_prompt: 'Guidance: 1 Steps: 20 Seed: 3499351274'
output:
url: images/20240818_082053_3499351274_flux1-dev.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: mad-stncl, flat colors
license: unknown
---
# Stencil Art FLUX, SDXL & SD1.5
<Gallery />
## Model description
FROM https://civitai.com/models/460648/stencil-art-flux-sdxl-and-sd15
Triggers: mad-stncl, flat colors
Strength 0.5-0.8, 0.7 typical
Hey there,
this time I have a stencil art LoRA for you.
If you enjoy my work, consider showing your support with a 👍 or ❤️ on the model or images—it really keeps me motivated!
You can also follow me or buy me a coffee ☕ at: https://ko-fi.com/madcaddie
Usage tips for the LoRA are in the version details
Thanks and have fun!
## Trigger words
You should use `mad-stncl` to trigger the image generation.
You should use `flat colors` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Jonjew/StencilArtFlux/tree/main) them in the Files & versions tab.
|
ReadyArt/Forgotten-Abomination-8B-V2.2-Q4_K_M-GGUF
|
ReadyArt
| 2025-03-01T05:24:40Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:ReadyArt/Forgotten-Abomination-8B-V2.2",
"base_model:quantized:ReadyArt/Forgotten-Abomination-8B-V2.2",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-01T05:24:17Z |
---
base_model: ReadyArt/Forgotten-Abomination-8B-V2.2
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# sleepdeprived3/Forgotten-Abomination-8B-V2.2-Q4_K_M-GGUF
This model was converted to GGUF format from [`ReadyArt/Forgotten-Abomination-8B-V2.2`](https://huggingface.co/ReadyArt/Forgotten-Abomination-8B-V2.2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/ReadyArt/Forgotten-Abomination-8B-V2.2) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo sleepdeprived3/Forgotten-Abomination-8B-V2.2-Q4_K_M-GGUF --hf-file forgotten-abomination-8b-v2.2-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo sleepdeprived3/Forgotten-Abomination-8B-V2.2-Q4_K_M-GGUF --hf-file forgotten-abomination-8b-v2.2-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo sleepdeprived3/Forgotten-Abomination-8B-V2.2-Q4_K_M-GGUF --hf-file forgotten-abomination-8b-v2.2-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo sleepdeprived3/Forgotten-Abomination-8B-V2.2-Q4_K_M-GGUF --hf-file forgotten-abomination-8b-v2.2-q4_k_m.gguf -c 2048
```
|
bowilleatyou/6aaa8d6a-e7c4-427f-97fd-7ce83d4e6ce3
|
bowilleatyou
| 2025-03-01T05:24:31Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-02-28T23:52:10Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
viaface/via_svit_001
|
viaface
| 2025-03-01T05:23:44Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-03-01T05:23:44Z |
---
license: apache-2.0
---
|
Jonjew/NeonCyberPunkCubism
|
Jonjew
| 2025-03-01T05:19:39Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:unknown",
"region:us"
] |
text-to-image
| 2025-03-01T05:19:10Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
(text banner in the bottom reading "CUBISM" blocky font:1.4)
mad-cbpk-cubism, woman, cyberpunk, teal background, orange outlines,
<lora:neon-cyberpunk-cubism-flux-000009:1.0>
parameters:
negative_prompt: 'Guidance: 1 Steps: 20 Seed: 1222353332'
output:
url: images/20241031_142935_1222353332_flux1-dev-fp8-e4m3fn.png
- text: >-
(text banner "CUBISM" :1.4) mad-cbpk-cubism, woman, cyberpunk, teal
background, orange outlines, <lora:neon-cyberpunk-cubism-flux-000009:1.0>
parameters:
negative_prompt: 'Guidance: 1 Steps: 20 Seed: 4127093335'
output:
url: images/20241031_141813_4127093335_flux1-dev-fp8-e4m3fn.png
- text: >-
mad-cbpk-cubism woman in kimono made of 3d block shapes, yellow moon,
cyberpunk, dynamic pose, (cubism:1.4), painting
<lora:neon-cyberpunk-cubism-flux-000009:1.2>
parameters:
negative_prompt: 'Guidance: 1 Steps: 20 Seed: 753749128'
output:
url: images/20241031_131848_753749128_flux1-dev-fp8-e4m3fn.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: mad-cbpk-cubism
license: unknown
---
# Neon Cyberpunk Cubism FLUX & SDXL
<Gallery />
## Model description
FROM https://civitai.com/models/412468/neon-cyberpunk-cubism-flux-and-sdxl
Trigger mad-cbpk-cubism
Hey there,
this time I have another cyberpunk art LoRA for you. I tried to combine the high-tech, sci-fi visuals of cyberpunk with the cubism art style.
The LoRA is trained on cyberpunk-themed images in orange and teal coloring, so it has a natural bias toward this look, but with proper prompting you should be able to easily change the coloring or theme of the image.
If you enjoy my work, consider showing your support with a 👍 or ❤️ on the model or images—it really keeps me motivated!
You can also follow me or buy me a coffee ☕ at: https://ko-fi.com/madcaddie
Usage tips for the LoRA are in the version details
## Trigger words
You should use `mad-cbpk-cubism` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Jonjew/NeonCyberPunkCubism/tree/main) them in the Files & versions tab.
|
mradermacher/openbuddy-r1-67b-v25.1-65k-i1-GGUF
|
mradermacher
| 2025-03-01T05:14:41Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:OpenBuddy/openbuddy-r1-67b-v25.1-65k",
"base_model:quantized:OpenBuddy/openbuddy-r1-67b-v25.1-65k",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-02-28T21:45:38Z |
---
base_model: OpenBuddy/openbuddy-r1-67b-v25.1-65k
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/OpenBuddy/openbuddy-r1-67b-v25.1-65k
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/openbuddy-r1-67b-v25.1-65k-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
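Multi-part quants such as the split i1-Q6_K in this repo can be joined with a plain byte-wise concatenation before loading. A minimal sketch, using dummy placeholder files in place of the real downloads (the actual part names appear in the quant table):

```shell
# Sketch: split GGUF quants are plain byte ranges, so joining the parts
# in order yields the full file. The dummy files below stand in for real
# downloads such as openbuddy-r1-67b-v25.1-65k.i1-Q6_K.gguf.part1of2.
printf 'first-half-' > model.i1-Q6_K.gguf.part1of2
printf 'second-half' > model.i1-Q6_K.gguf.part2of2

# Concatenate the parts in order into one usable .gguf file.
cat model.i1-Q6_K.gguf.part1of2 model.i1-Q6_K.gguf.part2of2 > model.i1-Q6_K.gguf
```

After joining, the single `.gguf` file can be passed to llama.cpp as usual, and the part files can be deleted.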
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/openbuddy-r1-67b-v25.1-65k-i1-GGUF/resolve/main/openbuddy-r1-67b-v25.1-65k.i1-IQ1_S.gguf) | i1-IQ1_S | 14.8 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-r1-67b-v25.1-65k-i1-GGUF/resolve/main/openbuddy-r1-67b-v25.1-65k.i1-IQ1_M.gguf) | i1-IQ1_M | 16.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-r1-67b-v25.1-65k-i1-GGUF/resolve/main/openbuddy-r1-67b-v25.1-65k.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 18.3 | |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-r1-67b-v25.1-65k-i1-GGUF/resolve/main/openbuddy-r1-67b-v25.1-65k.i1-IQ2_XS.gguf) | i1-IQ2_XS | 20.3 | |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-r1-67b-v25.1-65k-i1-GGUF/resolve/main/openbuddy-r1-67b-v25.1-65k.i1-IQ2_S.gguf) | i1-IQ2_S | 21.4 | |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-r1-67b-v25.1-65k-i1-GGUF/resolve/main/openbuddy-r1-67b-v25.1-65k.i1-IQ2_M.gguf) | i1-IQ2_M | 23.2 | |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-r1-67b-v25.1-65k-i1-GGUF/resolve/main/openbuddy-r1-67b-v25.1-65k.i1-Q2_K_S.gguf) | i1-Q2_K_S | 23.3 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-r1-67b-v25.1-65k-i1-GGUF/resolve/main/openbuddy-r1-67b-v25.1-65k.i1-Q2_K.gguf) | i1-Q2_K | 25.2 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-r1-67b-v25.1-65k-i1-GGUF/resolve/main/openbuddy-r1-67b-v25.1-65k.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 26.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-r1-67b-v25.1-65k-i1-GGUF/resolve/main/openbuddy-r1-67b-v25.1-65k.i1-IQ3_XS.gguf) | i1-IQ3_XS | 28.0 | |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-r1-67b-v25.1-65k-i1-GGUF/resolve/main/openbuddy-r1-67b-v25.1-65k.i1-Q3_K_S.gguf) | i1-Q3_K_S | 29.4 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-r1-67b-v25.1-65k-i1-GGUF/resolve/main/openbuddy-r1-67b-v25.1-65k.i1-IQ3_S.gguf) | i1-IQ3_S | 29.5 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-r1-67b-v25.1-65k-i1-GGUF/resolve/main/openbuddy-r1-67b-v25.1-65k.i1-IQ3_M.gguf) | i1-IQ3_M | 30.6 | |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-r1-67b-v25.1-65k-i1-GGUF/resolve/main/openbuddy-r1-67b-v25.1-65k.i1-Q3_K_M.gguf) | i1-Q3_K_M | 32.8 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-r1-67b-v25.1-65k-i1-GGUF/resolve/main/openbuddy-r1-67b-v25.1-65k.i1-Q3_K_L.gguf) | i1-Q3_K_L | 35.7 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-r1-67b-v25.1-65k-i1-GGUF/resolve/main/openbuddy-r1-67b-v25.1-65k.i1-IQ4_XS.gguf) | i1-IQ4_XS | 36.3 | |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-r1-67b-v25.1-65k-i1-GGUF/resolve/main/openbuddy-r1-67b-v25.1-65k.i1-Q4_0.gguf) | i1-Q4_0 | 38.4 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-r1-67b-v25.1-65k-i1-GGUF/resolve/main/openbuddy-r1-67b-v25.1-65k.i1-Q4_K_S.gguf) | i1-Q4_K_S | 38.5 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-r1-67b-v25.1-65k-i1-GGUF/resolve/main/openbuddy-r1-67b-v25.1-65k.i1-Q4_K_M.gguf) | i1-Q4_K_M | 40.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-r1-67b-v25.1-65k-i1-GGUF/resolve/main/openbuddy-r1-67b-v25.1-65k.i1-Q4_1.gguf) | i1-Q4_1 | 42.4 | |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-r1-67b-v25.1-65k-i1-GGUF/resolve/main/openbuddy-r1-67b-v25.1-65k.i1-Q5_K_S.gguf) | i1-Q5_K_S | 46.6 | |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-r1-67b-v25.1-65k-i1-GGUF/resolve/main/openbuddy-r1-67b-v25.1-65k.i1-Q5_K_M.gguf) | i1-Q5_K_M | 47.8 | |
| [PART 1](https://huggingface.co/mradermacher/openbuddy-r1-67b-v25.1-65k-i1-GGUF/resolve/main/openbuddy-r1-67b-v25.1-65k.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/openbuddy-r1-67b-v25.1-65k-i1-GGUF/resolve/main/openbuddy-r1-67b-v25.1-65k.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 55.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
AMindToThink/GEMMA-2-2B-FT-ORPO-ISAERFT_gemma-2-2b-lr1.9e-05-beta0.15-20250301-0449
|
AMindToThink
| 2025-03-01T05:14:22Z | 0 | 0 |
transformers
|
[
"transformers",
"generated_from_trainer",
"smol-course",
"module_1",
"isaerft",
"lr_1.9369302408016977e-05",
"beta_0.15",
"arxiv:2403.07691",
"base_model:google/gemma-2-2b",
"base_model:finetune:google/gemma-2-2b",
"endpoints_compatible",
"region:us"
] | null | 2025-03-01T05:14:20Z |
---
base_model: google/gemma-2-2b
library_name: transformers
model_name: GEMMA-2-2B-FT-ORPO-ISAERFT_gemma-2-2b-lr1.9e-05-beta0.15-20250301-0449
tags:
- generated_from_trainer
- smol-course
- module_1
- isaerft
- lr_1.9369302408016977e-05
- beta_0.15
licence: license
---
# Model Card for GEMMA-2-2B-FT-ORPO-ISAERFT_gemma-2-2b-lr1.9e-05-beta0.15-20250301-0449
This model is a fine-tuned version of [google/gemma-2-2b](https://huggingface.co/google/gemma-2-2b).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="AMindToThink/GEMMA-2-2B-FT-ORPO-ISAERFT_gemma-2-2b-lr1.9e-05-beta0.15-20250301-0449", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/matthewkhoriaty-northwestern-university/orpo-isaerft-sweep/runs/ixhw8kjz)
This model was trained with ORPO, a method introduced in [ORPO: Monolithic Preference Optimization without Reference Model](https://huggingface.co/papers/2403.07691).
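The `beta0.15` in the model name is the weight ORPO places on its odds-ratio preference term. As a rough illustration of the formula from the paper — not the TRL implementation — the per-pair term can be sketched in plain Python (`orpo_preference_loss` and its scalar sequence probabilities are simplifications introduced here for clarity):

```python
import math

def odds(p: float) -> float:
    # odds of generating a sequence that has probability p
    return p / (1.0 - p)

def orpo_preference_loss(p_chosen: float, p_rejected: float, beta: float = 0.15) -> float:
    # L_OR = -log sigmoid(log(odds(chosen) / odds(rejected))), scaled by beta.
    # ORPO adds this term to the usual SFT (NLL) loss on the chosen answer,
    # so no separate reference model is needed.
    log_odds_ratio = math.log(odds(p_chosen)) - math.log(odds(p_rejected))
    sigmoid = 1.0 / (1.0 + math.exp(-log_odds_ratio))
    return -beta * math.log(sigmoid)
```

When the chosen answer is already much more likely than the rejected one, the term shrinks toward zero; when the model prefers the rejected answer, it grows, nudging the odds ratio in the right direction.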
### Framework versions
- TRL: 0.15.1
- Transformers: 4.49.0
- Pytorch: 2.6.0
- Datasets: 2.21.0
- Tokenizers: 0.21.0
## Citations
Cite ORPO as:
```bibtex
@article{hong2024orpo,
title = {{ORPO: Monolithic Preference Optimization without Reference Model}},
author = {Jiwoo Hong and Noah Lee and James Thorne},
year = 2024,
eprint = {arXiv:2403.07691}
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
darkc0de/BuddyGlassIsBonziBuddyUncensored-Q5_K_M-GGUF
|
darkc0de
| 2025-03-01T05:13:47Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:darkc0de/BuddyGlassIsBonziBuddyUncensored",
"base_model:quantized:darkc0de/BuddyGlassIsBonziBuddyUncensored",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-01T05:12:30Z |
---
base_model: darkc0de/BuddyGlassIsBonziBuddyUncensored
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# darkc0de/BuddyGlassIsBonziBuddyUncensored-Q5_K_M-GGUF
This model was converted to GGUF format from [`darkc0de/BuddyGlassIsBonziBuddyUncensored`](https://huggingface.co/darkc0de/BuddyGlassIsBonziBuddyUncensored) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/darkc0de/BuddyGlassIsBonziBuddyUncensored) for more details on the model.
## Use with llama.cpp
Install llama.cpp via brew (works on macOS and Linux):
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo darkc0de/BuddyGlassIsBonziBuddyUncensored-Q5_K_M-GGUF --hf-file buddyglassisbonzibuddyuncensored-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo darkc0de/BuddyGlassIsBonziBuddyUncensored-Q5_K_M-GGUF --hf-file buddyglassisbonzibuddyuncensored-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo darkc0de/BuddyGlassIsBonziBuddyUncensored-Q5_K_M-GGUF --hf-file buddyglassisbonzibuddyuncensored-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo darkc0de/BuddyGlassIsBonziBuddyUncensored-Q5_K_M-GGUF --hf-file buddyglassisbonzibuddyuncensored-q5_k_m.gguf -c 2048
```
|
Flytoanything/model
|
Flytoanything
| 2025-03-01T05:07:56Z | 0 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-03-01T04:39:57Z |
---
base_model: unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Flytoanything
- **License:** apache-2.0
- **Finetuned from model:** unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
guli111/11
|
guli111
| 2025-03-01T05:07:00Z | 0 | 0 | null |
[
"arxiv:1910.09700",
"region:us"
] | null | 2025-03-01T04:56:27Z |
Helsinki-NLP/opus-mt-zh-en
metrics:
- bleu
base_model:
- perplexity-ai/r1-1776
new_version: perplexity-ai/r1-1776
pipeline_tag: translation
library_name: asteroid
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Nexesenex/Llama_3.1_8b_Dolerstormed_V1.04
|
Nexesenex
| 2025-03-01T05:05:32Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2403.19522",
"base_model:Nexesenex/Llama_3.1_8b_Dolermed_R1_V1.03",
"base_model:merge:Nexesenex/Llama_3.1_8b_Dolermed_R1_V1.03",
"base_model:Nexesenex/Llama_3.1_8b_Hermedash_R1_V1.04",
"base_model:merge:Nexesenex/Llama_3.1_8b_Hermedash_R1_V1.04",
"base_model:Nexesenex/Llama_3.1_8b_Stormeder_v1.04",
"base_model:merge:Nexesenex/Llama_3.1_8b_Stormeder_v1.04",
"license:llama3.1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-02-28T18:47:55Z |
---
base_model:
- Nexesenex/Llama_3.1_8b_Hermedash_R1_V1.04
- Nexesenex/Llama_3.1_8b_Dolermed_R1_V1.03
- Nexesenex/Llama_3.1_8b_Stormeder_v1.04
library_name: transformers
tags:
- mergekit
- merge
license: llama3.1
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [Nexesenex/Llama_3.1_8b_Dolermed_R1_V1.03](https://huggingface.co/Nexesenex/Llama_3.1_8b_Dolermed_R1_V1.03) as a base.
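As a rough sketch of what Model Stock does: it averages the fine-tuned models, then interpolates that average back toward the base using a ratio derived from the angle between the two task vectors. The function below is an illustrative per-parameter approximation of the paper's formula on flat weight lists, not the mergekit implementation (which works layer by layer):

```python
import math

def model_stock_merge(w0, w1, w2):
    """Merge two fine-tuned weight vectors w1, w2 with base w0 (Model Stock, arXiv:2403.19522).

    The interpolation ratio t = 2*cos(theta) / (1 + cos(theta)) is derived from
    the angle theta between the task vectors (w1 - w0) and (w2 - w0)."""
    d1 = [a - b for a, b in zip(w1, w0)]
    d2 = [a - b for a, b in zip(w2, w0)]
    dot = sum(a * b for a, b in zip(d1, d2))
    n1 = math.sqrt(sum(a * a for a in d1))
    n2 = math.sqrt(sum(a * a for a in d2))
    cos_theta = dot / (n1 * n2)
    t = 2.0 * cos_theta / (1.0 + cos_theta)
    # average of the fine-tuned models, then interpolate toward the base
    w_avg = [(a + b) / 2.0 for a, b in zip(w1, w2)]
    return [t * a + (1.0 - t) * b for a, b in zip(w_avg, w0)]
```

Intuitively, agreeing fine-tunes (small angle) keep their average almost unchanged, while near-orthogonal fine-tunes pull the merge back toward the base weights.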
### Models Merged
The following models were included in the merge:
* [Nexesenex/Llama_3.1_8b_Hermedash_R1_V1.04](https://huggingface.co/Nexesenex/Llama_3.1_8b_Hermedash_R1_V1.04)
* [Nexesenex/Llama_3.1_8b_Stormeder_v1.04](https://huggingface.co/Nexesenex/Llama_3.1_8b_Stormeder_v1.04)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: model_stock
models:
- model: Nexesenex/Llama_3.1_8b_Stormeder_v1.04
parameters:
weight: 1.0
- model: Nexesenex/Llama_3.1_8b_Hermedash_R1_V1.04
parameters:
weight: 1.0
base_model: Nexesenex/Llama_3.1_8b_Dolermed_R1_V1.03
dtype: bfloat16
normalize: true
chat_template: auto
tokenizer:
source: union
```
|
Kaze-droid/politicalBiasDistilBert
|
Kaze-droid
| 2025-03-01T05:04:53Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-03-01T05:04:53Z |
---
license: apache-2.0
---
|
robiulawaldev/758c31a0-f3c8-433d-9a8f-82c05f8afe75
|
robiulawaldev
| 2025-03-01T05:01:32Z | 0 | 0 |
peft
|
[
"peft",
"generated_from_trainer",
"base_model:unsloth/codegemma-7b",
"base_model:adapter:unsloth/codegemma-7b",
"region:us"
] | null | 2025-03-01T05:01:15Z |
---
library_name: peft
tags:
- generated_from_trainer
base_model: unsloth/codegemma-7b
model-index:
- name: robiulawaldev/758c31a0-f3c8-433d-9a8f-82c05f8afe75
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robiulawaldev/758c31a0-f3c8-433d-9a8f-82c05f8afe75
This model was trained from scratch on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0172
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
noreff/flux-volod-manual-captions
|
noreff
| 2025-03-01T04:59:50Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-03-01T04:59:47Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: Volod
---
# Flux Volod Manual Captions
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `Volod` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('noreff/flux-volod-manual-captions', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
Lettria/grag-go-idf-contrastive_10-trial-9
|
Lettria
| 2025-03-01T04:58:42Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"tensorboard",
"onnx",
"safetensors",
"xlm-roberta",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:2939",
"loss:ContrastiveLoss",
"arxiv:1908.10084",
"base_model:intfloat/multilingual-e5-base",
"base_model:quantized:intfloat/multilingual-e5-base",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-03-01T04:57:36Z |
---
base_model: intfloat/multilingual-e5-base
library_name: sentence-transformers
metrics:
- cosine_accuracy
- cosine_accuracy_threshold
- cosine_f1
- cosine_f1_threshold
- cosine_precision
- cosine_recall
- cosine_ap
- cosine_mcc
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:2939
- loss:ContrastiveLoss
widget:
- source_sentence: 'Type de project: Les projets qui s''inscrivent dans les stratégies
des établissements supérieurs et qui répondent aux priorités énoncées dans le
SRESRI 2023-2028 : Améliorer les conditions de vie, d''études et de formationPermettre
aux jeunes et aux professionnels d''accéder aux meilleures formationsFaciliter
le déploiement de services et équipementsSoutenir les transformations pédagogiques
de formation pour répondre aux enjeux sociétaux, économiques et environnementaux
Les candidats doivent également répondre à au moins l''une des deux thématiques
suivantes : Projets d''innovation dans les usages numériques'
sentences:
- '''Association'':entité|EST|''Bénéficiaires'':__inferred__'
- '''mentor d''entreprise'':personne|JOUE_RÔLE|''passeur social'':rôle'
- '''Date de début'':concept|EST|''non précisée'':__inferred__'
- source_sentence: 'Type de project: Les thématiques abordées, au titre du programme,
comprennent la santé numérique et les risques de dépendance, la protection des
données personnelles et la prévention des situations de harcèlement et de cyberharcèlement
; les interventions questionnent aussi les aspects numériques de la vie affective
et sexuelle et son corollaire de risques tels que le "sexting", le "Revenge porn",
le chantage sexuel et l''impact de la pornographie sur les jeunes. A la demande
des établissements, des focus thématiques peuvent être réalisés sur d''autres
sujets comme la prévention des phénomènes de prostitution des mineurs, les problématiques
liées aux jeux d''argent et de hasard en ligne ou encore la lutte contre la désinformation
à travers une approche d''éducation aux médias et à l''information. Les établissements
bénéficiaires peuvent choisir jusqu''à deux thématiques qu''ils identifient comme
prioritaires.'
sentences:
- '''Appel à projets'':événement|VOTER|''Commission permanente'':organisation'
- '''Région'':organisation|soutient|''structures privées'':organisation'
- '''petites entreprises innovantes franciliennes'':bénéficiaire|INCLUT|''Professionnel
- Créateur d''entreprise'':bénéficiaire'
- source_sentence: 'Procédures et démarches: Les éventuelles manifestations d’intérêt
concurrentes devront obligatoirement comporter les éléments de nature à en assurer
le sérieux et notamment les documents suivants : un courrier de présentation et
de candidature du candidat ;une présentation du projet qu’il entend réaliser (5
à 6 pages maximum hors annexes), répondant aux activités et contraintes décrites
dans le présent document et comprenant, a minima :une description de l’offre technique,
des grilles tarifaires, de la clientèle cible, des modalités d’exploitation envisagées,un
compte de résultat prévisionnel détaillant'
sentences:
- '''projet'':concept|COMPREND|''grilles tarifaires'':concept'
- '''dispositif de soutien'':programme|ASSOCIÉ|''Culture : Musique'':thème'
- '''plateforme mesdemarches.iledefrance.fr'':plateforme|BÉNÉFICIAIRE|''EPCI'':entité'
- source_sentence: 'Date de début: Lundi 2 Septembre 2024, à 00:00:00 (UTC+0200)
Date de fin (clôture): Vendredi 31 Janvier 2025, à 00:00:00 (UTC+0100)
Date de début de la future campagne: Lundi 2 Septembre 2024, à 00:00:00 (UTC+0200)'
sentences:
- '''Début de la future campagne'':événement|a pour période (Properties={''startDate'':
''2024-09-02T00:00:00+02:00''})|''Clôture de la campagne'':événement'
- '''Sociétés de production'':organisation|ENREGISTRÉ|''FDSI Audiovisuel'':programme'
- '''Date de fin'':concept|EST|''non précisée'':__inferred__'
- source_sentence: 'Procédures et démarches: La demande est à effectuer en ligne sur
la plateforme mesdemarches.iledefrance.frLes dates limites de dépôt sont : avant
le 1er décembre, le 1er février, le 1er juin ou le 16 août 2024 (pour une réponse
fin novembre).
Bénéficiaires: Association - Fondation, Association - Régie par la loi de 1901,
Association - ONG, Collectivité ou institution - Communes de 10 000 à 20 000 hab,
Collectivité ou institution - Communes de 2000 à 10 000 hab, Collectivité ou institution
- Communes de < 2000 hab, Collectivité ou institution - Communes de > 20 000 hab,
Collectivité ou institution - Département, Collectivité ou institution - EPCI,
Collectivité ou institution - EPT / Métropole du Grand Paris, Collectivité ou
institution - Bailleurs sociaux, Collectivité ou institution - Autre (GIP, copropriété,
EPA...), Collectivité ou institution - Office de tourisme intercommunal
Précision sure les bénéficiaires: nan'
sentences:
- '''plateforme mesdemarches.iledefrance.fr'':plateforme|BÉNÉFICIAIRE|''Association
- Fondation'':entité'
- '''mentorat'':concept|IMPLIQUE|''salariés d''entreprises'':groupe'
- '''actions'':concept|VALORISE|''maisons d''artistes'':lieu'
model-index:
- name: SentenceTransformer based on intfloat/multilingual-e5-base
results:
- task:
type: binary-classification
name: Binary Classification
dataset:
name: BinaryClassifEval
type: BinaryClassifEval
metrics:
- type: cosine_accuracy
value: 0.8212719298245614
name: Cosine Accuracy
- type: cosine_accuracy_threshold
value: 0.855517566204071
name: Cosine Accuracy Threshold
- type: cosine_f1
value: 0.8739365815931941
name: Cosine F1
- type: cosine_f1_threshold
value: 0.855517566204071
name: Cosine F1 Threshold
- type: cosine_precision
value: 0.8345642540620384
name: Cosine Precision
- type: cosine_recall
value: 0.9172077922077922
name: Cosine Recall
- type: cosine_ap
value: 0.9473007930852937
name: Cosine Ap
- type: cosine_mcc
value: 0.5768451443521337
name: Cosine Mcc
---
# SentenceTransformer based on intfloat/multilingual-e5-base
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [intfloat/multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base) on the json dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [intfloat/multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base) <!-- at revision 835193815a3936a24a0ee7dc9e3d48c1fbb19c55 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- json
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
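The Pooling and Normalize modules above amount to a masked mean over token embeddings followed by L2 normalization. A minimal plain-Python sketch of that combination (illustrative only; `mean_pool_and_normalize` is not part of the library, and real inputs are batched tensors rather than lists):

```python
import math

def mean_pool_and_normalize(token_embeddings, attention_mask):
    """token_embeddings: list of per-token vectors; attention_mask: 1 for real tokens, 0 for padding."""
    dim = len(token_embeddings[0])
    total = [0.0] * dim
    count = 0
    for vec, mask in zip(token_embeddings, attention_mask):
        if mask:  # padding tokens are excluded from the mean
            total = [t + v for t, v in zip(total, vec)]
            count += 1
    mean = [t / count for t in total]
    norm = math.sqrt(sum(x * x for x in mean))  # L2 norm for the Normalize() step
    return [x / norm for x in mean]
```

Because the output is unit-length, cosine similarity between two embeddings reduces to a plain dot product.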
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("Lettria/grag-go-idf-contrastive_10-trial-9")
# Run inference
sentences = [
'Procédures et démarches: La demande est à effectuer en ligne sur la plateforme mesdemarches.iledefrance.frLes dates limites de dépôt sont : avant le 1er décembre, le 1er février, le 1er juin ou le 16 août 2024 (pour une réponse fin novembre).\nBénéficiaires: Association - Fondation, Association - Régie par la loi de 1901, Association - ONG, Collectivité ou institution - Communes de 10 000 à 20 000 hab, Collectivité ou institution - Communes de 2000 à 10 000 hab, Collectivité ou institution - Communes de < 2000 hab, Collectivité ou institution - Communes de > 20 000 hab, Collectivité ou institution - Département, Collectivité ou institution - EPCI, Collectivité ou institution - EPT / Métropole du Grand Paris, Collectivité ou institution - Bailleurs sociaux, Collectivité ou institution - Autre (GIP, copropriété, EPA...), Collectivité ou institution - Office de tourisme intercommunal\nPrécision sure les bénéficiaires: nan',
"'plateforme mesdemarches.iledefrance.fr':plateforme|BÉNÉFICIAIRE|'Association - Fondation':entité",
"'mentorat':concept|IMPLIQUE|'salariés d'entreprises':groupe",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Binary Classification
* Dataset: `BinaryClassifEval`
* Evaluated with [<code>BinaryClassificationEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.BinaryClassificationEvaluator)
| Metric | Value |
|:--------------------------|:-----------|
| cosine_accuracy | 0.8213 |
| cosine_accuracy_threshold | 0.8555 |
| cosine_f1 | 0.8739 |
| cosine_f1_threshold | 0.8555 |
| cosine_precision | 0.8346 |
| cosine_recall | 0.9172 |
| **cosine_ap** | **0.9473** |
| cosine_mcc | 0.5768 |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### json
* Dataset: json
* Size: 2,939 training samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | label |
|:--------|:-------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:-----------------------------|
| type | string | string | int |
| details | <ul><li>min: 26 tokens</li><li>mean: 191.64 tokens</li><li>max: 429 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 31.2 tokens</li><li>max: 72 tokens</li></ul> | <ul><li>1: 100.00%</li></ul> |
* Samples:
| sentence1 | sentence2 | label |
|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------|:---------------|
| <code>Type de project: L’excès de précipitations tout au long de l’année a conduit à une chute spectaculaire des rendements des céréales d’été et des protéagineux (blé, orge, pois, féverole, etc.) que produisent 90% des agriculteurs d’Île-de-France, historique grenier à blé du pays. Tributaires naturels du fleurissement des cultures, les apiculteurs professionnels de la région ont également souffert de ces dérèglements climatiques.La Région accompagne les exploitations concernées en leur apportant une aide exceptionnelle.</code> | <code>'excès de précipitations':phénomène|DIMINUE|'rendements des protéagineux':concept</code> | <code>1</code> |
| <code>Type de project: Dans le cadre de sa stratégie « Impact 2028 », la Région s’engage dans la défense de la souveraineté industrielle en renforçant son soutien à une industrie circulaire et décarbonée, porteuse d’innovations et créatrice d’emplois. PM'up Jeunes pousses industrielles soutient les projets d’implantation d’une première usine tournée vers la décarbonation, l’efficacité énergétique et la circularité des processus de production. Ces projets peuvent prendre l'une de ces formes : Une première unité de production industrielle, après une phase de prototypage,Une ligne pilote de production industrielle, en interne ou chez un tiers situé en Île-de-France, à condition que sa production soit destinée à de premières commercialisations,La transformation d’une unité de production pilote à une unité de production industrielle</code> | <code>'Région Île-de-France':organisation|soutient|'industrie décarbonée':concept</code> | <code>1</code> |
| <code>Procédures et démarches: Le dépôt des demandes de subvention se fait en ligne sur la plateforme régionale mesdemarches.iledefrance.fr : Session de dépôt unique pour les nouvelles demandes : du 30 septembre au 4 novembre 2024 (11 heures) pour des festivals qui se déroulent entre le 1er mars 2025 et le 28 février 2026 (vote à la CP de mars 2025). Pour les demandes de renouvellement, un mail est envoyé aux structures concernées par le service du Spectacle vivant en amont de chaque session de dépôt.<br>Bénéficiaires: Professionnel - Culture, Association - Fondation, Association - Régie par la loi de 1901, Association - ONG, Collectivité ou institution - Communes de 10 000 à 20 000 hab, Collectivité ou institution - Autre (GIP, copropriété, EPA...), Collectivité ou institution - Communes de 2000 à 10 000 hab, Collectivité ou institution - Communes de < 2000 hab, Collectivité ou institution - Communes de > 20 000 hab, Collectivité ou institution - Département, Collectivité ou institution - EPC...</code> | <code>'Collectivité ou institution - EPCI':bénéficiaire|PEUT_BÉNÉFICIER|'demandes de subvention':procédure</code> | <code>1</code> |
* Loss: [<code>ContrastiveLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#contrastiveloss) with these parameters:
```json
{
"distance_metric": "SiameseDistanceMetric.COSINE_DISTANCE",
"margin": 0.5,
"size_average": true
}
```
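With cosine distance and a margin of 0.5, the per-pair loss reduces to a simple expression: positive pairs are penalized by their squared distance, and negative pairs only while they sit inside the margin. A sketch of the standard contrastive formulation (the library additionally averages over the batch when `size_average` is true; the 0.5 factor follows the usual convention):

```python
def contrastive_loss(distance: float, label: int, margin: float = 0.5) -> float:
    # distance: cosine distance, i.e. 1 - cosine similarity.
    # label 1: pull the pair together (penalize any distance);
    # label 0: push the pair apart until distance exceeds the margin.
    if label == 1:
        return 0.5 * distance ** 2
    return 0.5 * max(0.0, margin - distance) ** 2
```

Negative pairs already farther apart than the margin contribute zero loss, so training effort concentrates on hard negatives.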
### Evaluation Dataset
#### json
* Dataset: json
* Size: 912 evaluation samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>label</code>
* Approximate statistics based on the first 912 samples:
| | sentence1 | sentence2 | label |
|:--------|:-------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------|
| type | string | string | int |
| details | <ul><li>min: 24 tokens</li><li>mean: 175.73 tokens</li><li>max: 394 tokens</li></ul> | <ul><li>min: 17 tokens</li><li>mean: 30.53 tokens</li><li>max: 133 tokens</li></ul> | <ul><li>0: ~32.46%</li><li>1: ~67.54%</li></ul> |
* Samples:
| sentence1 | sentence2 | label |
|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:---------------|
| <code>Type de project: Le programme propose des rencontres le samedi après-midi dans une université ou une grande école réputée, entre les professionnels bénévoles et les lycéens et collégiens sous la forme d'atelier thématiques. Ces moments de rencontre touchent à une grande multitude de domaines d’activités. L'objectif est de donner l’opportunité aux jeunes les plus enclavés d’échanger avec des intervenants professionnels aux parcours atypiques et inspirants. Les intervenants suscitent les ambitions et élargissent les perspectives des élèves.</code> | <code>'rencontres':événement|impliquent|'professionnels bénévoles':groupe</code> | <code>1</code> |
| <code>Précision sure les bénéficiaires: Communes,Établissements publics de coopération intercommunale (avec ou sans fiscalité propre),Établissements publics territoriaux franciliens,Départements,Aménageurs publics et privés (lorsque ces derniers interviennent à la demande ou pour le compte d'une collectivité précitée).</code> | <code>'Aménageurs privés':entité|INTERVIENT_POUR|'Départements':entité</code> | <code>1</code> |
| <code>Date de début: non précisée<br>Date de fin (clôture): non précisée<br>Date de début de la future campagne: non précisée</code> | <code>'Date de fin':concept|EST|'non précisée':__inferred__</code> | <code>1</code> |
* Loss: [<code>ContrastiveLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#contrastiveloss) with these parameters:
```json
{
"distance_metric": "SiameseDistanceMetric.COSINE_DISTANCE",
"margin": 0.5,
"size_average": true
}
```
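ContrastiveLoss pulls positive pairs (label 1) together and pushes negative pairs (label 0) apart until their cosine distance exceeds the margin. A minimal sketch of the per-pair loss in plain Python, following the standard Hadsell et al. formulation that sentence-transformers implements (batching and `size_average` omitted):

```python
import math

def cosine_distance(u, v):
    # 1 - cosine similarity (SiameseDistanceMetric.COSINE_DISTANCE)
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / (norm_u * norm_v)

def contrastive_loss(u, v, label, margin=0.5):
    # label 1 (similar pair): penalize any distance at all.
    # label 0 (dissimilar pair): penalize only distances inside the margin.
    d = cosine_distance(u, v)
    if label == 1:
        return 0.5 * d ** 2
    return 0.5 * max(0.0, margin - d) ** 2
```

With `margin: 0.5`, a negative pair stops contributing to the loss once its cosine distance reaches 0.5.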
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 4
- `per_device_eval_batch_size`: 4
- `gradient_accumulation_steps`: 2
- `learning_rate`: 6.880743377052856e-05
- `num_train_epochs`: 20
- `lr_scheduler_type`: cosine
- `warmup_steps`: 332
- `bf16`: True
- `tf32`: True
- `load_best_model_at_end`: True
- `optim`: adamw_torch_fused
- `hub_model_id`: Lettria/grag-go-idf-contrastive_10-trial-9
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 4
- `per_device_eval_batch_size`: 4
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 2
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 6.880743377052856e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 20
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 332
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: True
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: Lettria/grag-go-idf-contrastive_10-trial-9
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
<details><summary>Click to expand</summary>
| Epoch | Step | Training Loss | Validation Loss | BinaryClassifEval_cosine_ap |
|:-------:|:-------:|:-------------:|:---------------:|:---------------------------:|
| 0.1361 | 50 | 0.0247 | - | - |
| 0.2721 | 100 | 0.0166 | - | - |
| 0.4082 | 150 | 0.012 | - | - |
| 0.5442 | 200 | 0.0101 | - | - |
| 0.6803 | 250 | 0.01 | - | - |
| 0.8163 | 300 | 0.0066 | - | - |
| 0.9524 | 350 | 0.0054 | - | - |
| **1.0** | **368** | **-** | **0.0226** | **0.9473** |
| 1.0871 | 400 | 0.0056 | - | - |
| 1.2231 | 450 | 0.0043 | - | - |
| 1.3592 | 500 | 0.0026 | - | - |
| 1.4952 | 550 | 0.0043 | - | - |
| 1.6313 | 600 | 0.0046 | - | - |
| 1.7673 | 650 | 0.0044 | - | - |
| 1.9034 | 700 | 0.0038 | - | - |
| 2.0 | 736 | - | 0.0337 | 0.9493 |
| 2.0381 | 750 | 0.0035 | - | - |
| 2.1741 | 800 | 0.0023 | - | - |
| 2.3102 | 850 | 0.0018 | - | - |
| 2.4463 | 900 | 0.001 | - | - |
| 2.5823 | 950 | 0.0019 | - | - |
| 2.7184 | 1000 | 0.0023 | - | - |
| 2.8544 | 1050 | 0.0026 | - | - |
| 2.9905 | 1100 | 0.002 | - | - |
| 3.0 | 1104 | - | 0.0269 | 0.9492 |
| 3.1252 | 1150 | 0.0019 | - | - |
| 3.2612 | 1200 | 0.0016 | - | - |
| 3.3973 | 1250 | 0.001 | - | - |
| 3.5333 | 1300 | 0.0011 | - | - |
| 3.6694 | 1350 | 0.0014 | - | - |
| 3.8054 | 1400 | 0.0012 | - | - |
| 3.9415 | 1450 | 0.0011 | - | - |
| 4.0 | 1472 | - | 0.0313 | 0.9417 |
| 4.0762 | 1500 | 0.0013 | - | - |
| 4.2122 | 1550 | 0.0016 | - | - |
| 4.3483 | 1600 | 0.0013 | - | - |
| 4.4844 | 1650 | 0.0008 | - | - |
| 4.6204 | 1700 | 0.0004 | - | - |
| 4.7565 | 1750 | 0.0007 | - | - |
| 4.8925 | 1800 | 0.0009 | - | - |
| 5.0 | 1840 | - | 0.0307 | 0.9371 |
| 5.0272 | 1850 | 0.0003 | - | - |
| 5.1633 | 1900 | 0.0005 | - | - |
| 5.2993 | 1950 | 0.0006 | - | - |
| 5.4354 | 2000 | 0.0004 | - | - |
| 5.5714 | 2050 | 0.0002 | - | - |
| 5.7075 | 2100 | 0.0004 | - | - |
| 5.8435 | 2150 | 0.0006 | - | - |
| 5.9796 | 2200 | 0.0003 | - | - |
| 6.0 | 2208 | - | 0.0283 | 0.9435 |
| 6.1143 | 2250 | 0.0003 | - | - |
| 6.2503 | 2300 | 0.0004 | - | - |
| 6.3864 | 2350 | 0.0001 | - | - |
| 6.5224 | 2400 | 0.0003 | - | - |
| 6.6585 | 2450 | 0.0002 | - | - |
| 6.7946 | 2500 | 0.0002 | - | - |
| 6.9306 | 2550 | 0.0003 | - | - |
| 7.0 | 2576 | - | 0.0249 | 0.9472 |
| 7.0653 | 2600 | 0.0004 | - | - |
| 7.2014 | 2650 | 0.0003 | - | - |
| 7.3374 | 2700 | 0.0004 | - | - |
| 7.4735 | 2750 | 0.0006 | - | - |
| 7.6095 | 2800 | 0.0002 | - | - |
| 7.7456 | 2850 | 0.0002 | - | - |
| 7.8816 | 2900 | 0.0003 | - | - |
| 8.0 | 2944 | - | 0.0314 | 0.9189 |
| 8.0163 | 2950 | 0.0002 | - | - |
| 8.1524 | 3000 | 0.0003 | - | - |
| 8.2884 | 3050 | 0.0003 | - | - |
| 8.4245 | 3100 | 0.0003 | - | - |
| 8.5605 | 3150 | 0.0006 | - | - |
| 8.6966 | 3200 | 0.0014 | - | - |
| 8.8327 | 3250 | 0.0009 | - | - |
| 8.9687 | 3300 | 0.0006 | - | - |
| 9.0 | 3312 | - | 0.0313 | 0.9208 |
| 9.1034 | 3350 | 0.0003 | - | - |
| 9.2395 | 3400 | 0.0007 | - | - |
| 9.3755 | 3450 | 0.0005 | - | - |
| 9.5116 | 3500 | 0.0003 | - | - |
| 9.6476 | 3550 | 0.0002 | - | - |
| 9.7837 | 3600 | 0.0006 | - | - |
| 9.9197 | 3650 | 0.0003 | - | - |
| 10.0 | 3680 | - | 0.0305 | 0.9282 |
| 10.0544 | 3700 | 0.0003 | - | - |
| 10.1905 | 3750 | 0.0002 | - | - |
| 10.3265 | 3800 | 0.0002 | - | - |
| 10.4626 | 3850 | 0.0001 | - | - |
| 10.5986 | 3900 | 0.0002 | - | - |
| 10.7347 | 3950 | 0.0001 | - | - |
| 10.8707 | 4000 | 0.0002 | - | - |
| 11.0 | 4048 | - | 0.0330 | 0.9229 |
| 11.0054 | 4050 | 0.0003 | - | - |
| 11.1415 | 4100 | 0.0001 | - | - |
| 11.2776 | 4150 | 0.0001 | - | - |
| 11.4136 | 4200 | 0.0001 | - | - |
| 11.5497 | 4250 | 0.0001 | - | - |
| 11.6857 | 4300 | 0.0001 | - | - |
| 11.8218 | 4350 | 0.0001 | - | - |
| 11.9578 | 4400 | 0.0001 | - | - |
| 12.0 | 4416 | - | 0.0315 | 0.9326 |
| 12.0925 | 4450 | 0.0001 | - | - |
| 12.2286 | 4500 | 0.0001 | - | - |
| 12.3646 | 4550 | 0.0 | - | - |
| 12.5007 | 4600 | 0.0002 | - | - |
| 12.6367 | 4650 | 0.0001 | - | - |
| 12.7728 | 4700 | 0.0001 | - | - |
| 12.9088 | 4750 | 0.0001 | - | - |
| 13.0 | 4784 | - | 0.0320 | 0.9254 |
| 13.0435 | 4800 | 0.0001 | - | - |
| 13.1796 | 4850 | 0.0001 | - | - |
| 13.3156 | 4900 | 0.0 | - | - |
| 13.4517 | 4950 | 0.0 | - | - |
| 13.5878 | 5000 | 0.0001 | - | - |
| 13.7238 | 5050 | 0.0001 | - | - |
| 13.8599 | 5100 | 0.0 | - | - |
| 13.9959 | 5150 | 0.0001 | - | - |
| 14.0 | 5152 | - | 0.0312 | 0.9331 |
| 14.1306 | 5200 | 0.0 | - | - |
| 14.2667 | 5250 | 0.0 | - | - |
| 14.4027 | 5300 | 0.0001 | - | - |
| 14.5388 | 5350 | 0.0 | - | - |
| 14.6748 | 5400 | 0.0001 | - | - |
| 14.8109 | 5450 | 0.0 | - | - |
| 14.9469 | 5500 | 0.0 | - | - |
| 15.0 | 5520 | - | 0.0313 | 0.9325 |
| 15.0816 | 5550 | 0.0 | - | - |
| 15.2177 | 5600 | 0.0 | - | - |
| 15.3537 | 5650 | 0.0 | - | - |
| 15.4898 | 5700 | 0.0001 | - | - |
| 15.6259 | 5750 | 0.0001 | - | - |
| 15.7619 | 5800 | 0.0001 | - | - |
| 15.8980 | 5850 | 0.0 | - | - |
| 16.0 | 5888 | - | 0.0313 | 0.9318 |
| 16.0327 | 5900 | 0.0 | - | - |
| 16.1687 | 5950 | 0.0 | - | - |
| 16.3048 | 6000 | 0.0 | - | - |
| 16.4408 | 6050 | 0.0 | - | - |
| 16.5769 | 6100 | 0.0 | - | - |
| 16.7129 | 6150 | 0.0001 | - | - |
| 16.8490 | 6200 | 0.0 | - | - |
| 16.9850 | 6250 | 0.0 | - | - |
| 17.0 | 6256 | - | 0.0311 | 0.9333 |
| 17.1197 | 6300 | 0.0 | - | - |
| 17.2558 | 6350 | 0.0 | - | - |
| 17.3918 | 6400 | 0.0 | - | - |
| 17.5279 | 6450 | 0.0 | - | - |
| 17.6639 | 6500 | 0.0001 | - | - |
| 17.8 | 6550 | 0.0 | - | - |
| 17.9361 | 6600 | 0.0 | - | - |
| 18.0 | 6624 | - | 0.0313 | 0.9324 |
| 18.0707 | 6650 | 0.0 | - | - |
| 18.2068 | 6700 | 0.0 | - | - |
| 18.3429 | 6750 | 0.0 | - | - |
| 18.4789 | 6800 | 0.0 | - | - |
| 18.6150 | 6850 | 0.0 | - | - |
| 18.7510 | 6900 | 0.0 | - | - |
| 18.8871 | 6950 | 0.0 | - | - |
| 19.0 | 6992 | - | 0.0313 | 0.9327 |
| 19.0218 | 7000 | 0.0 | - | - |
| 19.1578 | 7050 | 0.0 | - | - |
| 19.2939 | 7100 | 0.0 | - | - |
| 19.4299 | 7150 | 0.0 | - | - |
| 19.5660 | 7200 | 0.0 | - | - |
| 19.7020 | 7250 | 0.0 | - | - |
| 19.8381 | 7300 | 0.0 | - | - |
| 19.9469 | 7340 | - | 0.0226 | 0.9473 |
* The bold row denotes the saved checkpoint.
</details>
### Framework Versions
- Python: 3.11.9
- Sentence Transformers: 3.4.1
- Transformers: 4.48.3
- PyTorch: 2.3.0
- Accelerate: 1.1.0
- Datasets: 3.3.2
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### ContrastiveLoss
```bibtex
@inproceedings{hadsell2006dimensionality,
author={Hadsell, R. and Chopra, S. and LeCun, Y.},
booktitle={2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06)},
title={Dimensionality Reduction by Learning an Invariant Mapping},
year={2006},
volume={2},
number={},
pages={1735-1742},
doi={10.1109/CVPR.2006.100}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| dabrown/6a2ea502-582a-4075-9563-1ed4dfe37de2 | dabrown | 2025-03-01T04:58:18Z | 0 | 0 | peft | ["peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:unsloth/Qwen2-7B-Instruct", "base_model:adapter:unsloth/Qwen2-7B-Instruct", "license:apache-2.0", "region:us"] | null | 2025-03-01T01:43:39Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2-7B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 6a2ea502-582a-4075-9563-1ed4dfe37de2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.5.2`
```yaml
adapter: lora
base_model: unsloth/Qwen2-7B-Instruct
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- c2c161709bf34c3d_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/c2c161709bf34c3d_train_data.json
type:
field_instruction: title
field_output: lyrics
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 16
gradient_checkpointing: false
group_by_length: true
hub_model_id: dabrown/6a2ea502-582a-4075-9563-1ed4dfe37de2
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: false
lora_inference_mode: true
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: 1500
micro_batch_size: 2
mlflow_experiment_name: /tmp/c2c161709bf34c3d_train_data.json
model_type: AutoModelForCausalLM
modules_to_save: lm_head
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
peft_use_rslora: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: offline
wandb_name: 89e44ac5-963f-4035-972d-1436c67b7fe7
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 89e44ac5-963f-4035-972d-1436c67b7fe7
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 6a2ea502-582a-4075-9563-1ed4dfe37de2
This model is a fine-tuned version of [unsloth/Qwen2-7B-Instruct](https://huggingface.co/unsloth/Qwen2-7B-Instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2186
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- optimizer: 8-bit AdamW (bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 1099
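The cosine schedule with warmup listed above ramps the learning rate linearly over the first 10 steps, then decays it along a half cosine toward zero over the remaining steps. An illustrative sketch (the exact `transformers` scheduler may differ in minor details):

```python
import math

def lr_at(step, base_lr=2e-4, warmup_steps=10, total_steps=1099):
    # Linear warmup to base_lr, then cosine decay toward zero.
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))
```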
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.8811 | 0.0009 | 1 | 2.6547 |
| 2.0803 | 0.2503 | 275 | 2.2955 |
| 2.6179 | 0.5006 | 550 | 2.2486 |
| 2.2288 | 0.7509 | 825 | 2.2186 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.3
- Pytorch 2.3.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
| DoppelReflEx/L3-8B-R1-WolfCore-V1.5-test | DoppelReflEx | 2025-03-01T04:56:46Z | 0 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "arxiv:2403.19522", "base_model:Sao10K/L3-8B-Lunaris-v1", "base_model:merge:Sao10K/L3-8B-Lunaris-v1", "base_model:Sao10K/L3-8B-Stheno-v3.2", "base_model:merge:Sao10K/L3-8B-Stheno-v3.2", "base_model:SicariusSicariiStuff/Wingless_Imp_8B", "base_model:merge:SicariusSicariiStuff/Wingless_Imp_8B", "base_model:TheDrummer/Llama-3SOME-8B-v2", "base_model:merge:TheDrummer/Llama-3SOME-8B-v2", "base_model:cgato/L3-TheSpice-8b-v0.8.3", "base_model:merge:cgato/L3-TheSpice-8b-v0.8.3", "base_model:deepseek-ai/DeepSeek-R1-Distill-Llama-8B", "base_model:merge:deepseek-ai/DeepSeek-R1-Distill-Llama-8B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-03-01T04:52:22Z |
---
base_model:
- Sao10K/L3-8B-Lunaris-v1
- SicariusSicariiStuff/Wingless_Imp_8B
- cgato/L3-TheSpice-8b-v0.8.3
- deepseek-ai/DeepSeek-R1-Distill-Llama-8B
- Sao10K/L3-8B-Stheno-v3.2
- TheDrummer/Llama-3SOME-8B-v2
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [Sao10K/L3-8B-Stheno-v3.2](https://huggingface.co/Sao10K/L3-8B-Stheno-v3.2) as a base.
### Models Merged
The following models were included in the merge:
* [Sao10K/L3-8B-Lunaris-v1](https://huggingface.co/Sao10K/L3-8B-Lunaris-v1)
* [SicariusSicariiStuff/Wingless_Imp_8B](https://huggingface.co/SicariusSicariiStuff/Wingless_Imp_8B)
* [cgato/L3-TheSpice-8b-v0.8.3](https://huggingface.co/cgato/L3-TheSpice-8b-v0.8.3)
* [deepseek-ai/DeepSeek-R1-Distill-Llama-8B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-8B)
* [TheDrummer/Llama-3SOME-8B-v2](https://huggingface.co/TheDrummer/Llama-3SOME-8B-v2)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: Sao10K/L3-8B-Stheno-v3.2
merge_method: model_stock
dtype: bfloat16
models:
- model: cgato/L3-TheSpice-8b-v0.8.3
- model: Sao10K/L3-8B-Lunaris-v1
- model: TheDrummer/Llama-3SOME-8B-v2
- model: SicariusSicariiStuff/Wingless_Imp_8B
- model: deepseek-ai/DeepSeek-R1-Distill-Llama-8B
```
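Model Stock computes, per layer, an interpolation weight from the geometry of the fine-tuned checkpoints relative to the base, then blends the average of the fine-tuned weights back toward the base. A toy sketch of that final blend in plain Python (`t` is shown as a free parameter here; mergekit derives it per layer from the weight angles, so this is intuition, not mergekit's implementation):

```python
def model_stock_layer(base, finetuned, t):
    # base: one layer's weights (flattened); finetuned: the same layer from
    # each fine-tuned model. Average the fine-tuned copies, then interpolate
    # toward the base with weight t.
    n = len(finetuned)
    avg = [sum(m[i] for m in finetuned) / n for i in range(len(base))]
    return [t * a + (1.0 - t) * b for a, b in zip(avg, base)]
```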
| 7Dragons/Michelin_1v1 | 7Dragons | 2025-03-01T04:54:21Z | 0 | 0 | null | ["onnx", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us"] | any-to-any | 2025-03-01T04:53:09Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
| bruhzair/Cui-x2-t1 | bruhzair | 2025-03-01T04:54:18Z | 0 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-03-01T04:54:17Z |
---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
---
# Cui5
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the Passthrough merge method.
### Models Merged
The following models were included in the merge:
* /workspace/cache/models--Steelskull--L3.3-Cu-Mai-R1-70b/snapshots/0353bb34f6e825a9d4a9a30e653bd7936e0b75b3
### Configuration
The following YAML configuration was used to produce this model:
```yaml
dtype: float16
merge_method: passthrough
slices:
- sources:
- layer_range: [0, 40]
model: /workspace/cache/models--Steelskull--L3.3-Cu-Mai-R1-70b/snapshots/0353bb34f6e825a9d4a9a30e653bd7936e0b75b3
- sources:
- layer_range: [20, 60]
model: /workspace/cache/models--Steelskull--L3.3-Cu-Mai-R1-70b/snapshots/0353bb34f6e825a9d4a9a30e653bd7936e0b75b3
- sources:
- layer_range: [40, 80]
model: /workspace/cache/models--Steelskull--L3.3-Cu-Mai-R1-70b/snapshots/0353bb34f6e825a9d4a9a30e653bd7936e0b75b3
```
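Because the passthrough method concatenates the listed slices verbatim, the three overlapping ranges above turn the 80-layer source into a 120-layer stack in which the overlapping blocks appear twice. A quick sketch of the resulting layer order:

```python
def stack_layers(slices):
    # Passthrough copies each [start, end) range verbatim, in order;
    # overlaps are duplicated rather than merged.
    out = []
    for start, end in slices:
        out.extend(range(start, end))
    return out

layers = stack_layers([(0, 40), (20, 60), (40, 80)])  # 120 layers total
```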
| swadhin42/vit-base-patch16-224-in21k-lora | swadhin42 | 2025-03-01T04:53:41Z | 0 | 0 | transformers | ["transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2025-03-01T04:53:38Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| liamj16/fine_tuned_Qwen2.5-Code-3B-all-w-perf | liamj16 | 2025-03-01T04:52:44Z | 0 | 0 | transformers | ["transformers", "safetensors", "qwen2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-03-01T04:50:09Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
kurogane/ModernBERT_Japanese_MT_Bench_test
|
kurogane
| 2025-03-01T04:52:24Z | 3 | 0 | null |
[
"safetensors",
"modernbert",
"ja",
"base_model:sbintuitions/modernbert-ja-130m",
"base_model:finetune:sbintuitions/modernbert-ja-130m",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2025-02-20T12:12:06Z |
---
language:
- ja
base_model:
- sbintuitions/modernbert-ja-130m
license: cc-by-nc-4.0
---
# ModernBERT_Japanese_MT_Bench_test
This is an experimental model.
I took the roleplay, humanities, and writing results of the Japanese MT Bench published on the [Nejumi LLM Leaderboard 3](https://wandb.ai/wandb-japan/llm-leaderboard3/reports/Nejumi-LLM-3--Vmlldzo3OTg2NjM2?accessToken=wpnwc9whr96pxm40dfe4k3xq513f9jc4yhj7q6pnvj4jtayoefbc77qhzbsrztgz) and trained ModernBERT on them on my own initiative.
Going forward, I would like to rerun the Japanese MT Bench myself and turn this into a usable model.
## Training results
The training code was written by ChatGPT. I would like to become able to design it myself someday...
Fine-tuning was done with [the training notebook](https://huggingface.co/kurogane/ModernBERT_Japanese_MT_Bench_test/blob/main/train_jmtb_test_v6%20(%E3%82%B3%E3%83%94%E3%83%BC).ipynb).
The Japanese MT Bench scores (0-10) were divided by 10 and the model was trained as a regression task over the range 0-1.0.
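The 0-10 to 0-1.0 scaling is a single transform applied to the labels before training and inverted at inference; a minimal sketch (the helper names are mine, not from the training notebook):

```python
def score_to_target(score: float) -> float:
    """Map a Japanese MT Bench score (0-10) to a regression target in [0, 1]."""
    return score / 10.0

def target_to_score(target: float) -> float:
    """Map a model prediction back to the 0-10 scale, clamped to the valid range."""
    return min(max(target, 0.0), 1.0) * 10.0

print(score_to_target(9))     # 0.9
print(target_to_score(1.4))   # clamped to 10.0
```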

This might be overtraining; how should I improve it?

Looking at the dataset distribution, outputs are heavily skewed toward 9, so the predictions may be biased high.
## Gap from the test data
Predictions were generated with the code in [the test notebook](https://huggingface.co/kurogane/ModernBERT_Japanese_MT_Bench_test/blob/main/modernbert_run_test.ipynb).

It looks like it is roughly predicting the scores, but it badly mispredicts the low ones, so it is probably not usable yet.
## License
Since the inherited licenses of each source model must be respected, please treat this model as essentially unusable.
For that reason, the license is CC-BY-NC-4.0.
|
ElysiaCoding/dqn-SpaceInvadersNoFrameskip-v4
|
ElysiaCoding
| 2025-03-01T04:52:09Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-03-01T04:06:40Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 601.50 +/- 74.97
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
SBX (SB3 + Jax): https://github.com/araffin/sbx
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga ElysiaCoding -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga ElysiaCoding -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga ElysiaCoding
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
sighmon/rl_course_vizdoom_health_gathering_supreme
|
sighmon
| 2025-03-01T04:51:35Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-03-01T04:51:27Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 8.74 +/- 2.54
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r sighmon/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m <path.to.enjoy.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m <path.to.train.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
Jonjew/NeonCyberPunkFLUX
|
Jonjew
| 2025-03-01T04:50:40Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:unknown",
"region:us"
] |
text-to-image
| 2025-03-01T04:50:14Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
mad-cbrbdy, woman, night, haze, neon signs, glowing lights of clothing,
photorealistic, dynamic pose, floating glowing text reading "Neon Cyberpunk
- Cyberbody" <lora:Neon_Cyberpunk_Cyberbody_FLUX:1>
parameters:
negative_prompt: 'Guidance: 1 Steps: 10 Seed: 2529191325'
output:
url: images/20240824_203903_2529191325_flux1-dev.png
- text: >-
mad-cbrbdy, woman, night, haze, neon signs, glowing lights of clothing,
      photorealistic, dynamic pose <lora:Neon_Cyberpunk_Cyberbody_FLUX:0.9>
parameters:
negative_prompt: 'Guidance: 1 Steps: 10 Seed: 3031505860'
output:
url: images/20240824_204616_3031505860_flux1-dev.png
- text: >-
mad-cbrbdy, woman on rooftop of a skyscraper squatting, holding a rifle
looking down, night, haze, neon signs, glowing lights on clothing, googles,
photorealistic, dynamic pose <lora:Neon_Cyberpunk_Cyberbody_FLUX:1>
parameters:
negative_prompt: 'Guidance: 1 Steps: 20 Seed: 1947219156'
output:
url: images/20240824_211330_1947219156_flux1-dev.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: mad-cbrbdy
license: unknown
---
# Neon Cyberpunk FLUX, SDXL & SD1.5 376
<Gallery />
## Model description
FROM https://civitai.com/models/269179/neon-cyberpunk-flux-sdxl-and-sd15
Trigger: mad-cbrbdy
Strength - start with 1
Hey there,
This lora is trained to add details to a cyberpunk character.
Version 1 has five different concepts it was trained on.
In version 2, each concept from version 1 has (or will eventually get) an individual LoRA.
If you enjoy my work, consider showing your support with a 👍 or ❤️ on the model or images—it really keeps me motivated!
You can also follow me or buy me a coffee ☕ at: https://ko-fi.com/madcaddie
Training Flux LoRAs is more expensive than SDXL LoRAs, so I'm using the early-access feature. Please keep in mind that the LoRA will be free after the early-access period expires.
## Trigger words
You should use `mad-cbrbdy` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Jonjew/NeonCyberPunkFLUX/tree/main) them in the Files & versions tab.
|
DevQuasar/EpistemeAI.Fireball-Meta-Llama-3.1-8B-Instruct-Agent-0.003-GGUF
|
DevQuasar
| 2025-03-01T04:49:34Z | 0 | 0 | null |
[
"gguf",
"text-generation",
"base_model:EpistemeAI/Fireball-Meta-Llama-3.1-8B-Instruct-Agent-0.003",
"base_model:quantized:EpistemeAI/Fireball-Meta-Llama-3.1-8B-Instruct-Agent-0.003",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-03-01T03:55:34Z |
---
base_model:
- EpistemeAI/Fireball-Meta-Llama-3.1-8B-Instruct-Agent-0.003
pipeline_tag: text-generation
---
[<img src="https://raw.githubusercontent.com/csabakecskemeti/devquasar/main/dq_logo_black-transparent.png" width="200"/>](https://devquasar.com)
Quantized version of: [EpistemeAI/Fireball-Meta-Llama-3.1-8B-Instruct-Agent-0.003](https://huggingface.co/EpistemeAI/Fireball-Meta-Llama-3.1-8B-Instruct-Agent-0.003)
'Make knowledge free for everyone'
<p align="center">
Made with <br>
<a href="https://www.civo.com/" target="_blank">
<img src="https://www.civo.com/assets/public/brand-assets/civo-logo-colour-60cc1622dedf346f7afde1fff760523f731b0aac106a5465af98ff4073114b74.svg" width="100"/>
</a>
</p>
<a href='https://ko-fi.com/L4L416YX7C' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi6.png?v=6' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>
|
NewEden/Lora-grpo
|
NewEden
| 2025-03-01T04:49:18Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Delta-Vector/Control-Nanuq-8B",
"base_model:adapter:Delta-Vector/Control-Nanuq-8B",
"region:us"
] | null | 2025-03-01T04:48:49Z |
---
base_model: Delta-Vector/Control-Nanuq-8B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.14.0
|
darkc0de/BuddyGlassIsBonziBuddyUncensored
|
darkc0de
| 2025-03-01T04:47:58Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2306.01708",
"base_model:TheDrummer/Cydonia-24B-v2",
"base_model:merge:TheDrummer/Cydonia-24B-v2",
"base_model:cognitivecomputations/Dolphin3.0-Mistral-24B",
"base_model:merge:cognitivecomputations/Dolphin3.0-Mistral-24B",
"base_model:huihui-ai/Mistral-Small-24B-Instruct-2501-abliterated",
"base_model:merge:huihui-ai/Mistral-Small-24B-Instruct-2501-abliterated",
"base_model:mistralai/Mistral-Small-24B-Instruct-2501",
"base_model:merge:mistralai/Mistral-Small-24B-Instruct-2501",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-01T04:34:46Z |
---
base_model:
- huihui-ai/Mistral-Small-24B-Instruct-2501-abliterated
- mistralai/Mistral-Small-24B-Instruct-2501
- cognitivecomputations/Dolphin3.0-Mistral-24B
- TheDrummer/Cydonia-24B-v2
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [mistralai/Mistral-Small-24B-Instruct-2501](https://huggingface.co/mistralai/Mistral-Small-24B-Instruct-2501) as a base.
### Models Merged
The following models were included in the merge:
* [huihui-ai/Mistral-Small-24B-Instruct-2501-abliterated](https://huggingface.co/huihui-ai/Mistral-Small-24B-Instruct-2501-abliterated)
* [cognitivecomputations/Dolphin3.0-Mistral-24B](https://huggingface.co/cognitivecomputations/Dolphin3.0-Mistral-24B)
* [TheDrummer/Cydonia-24B-v2](https://huggingface.co/TheDrummer/Cydonia-24B-v2)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: cognitivecomputations/Dolphin3.0-Mistral-24B
parameters:
density: 0.5
weight: 0.5
- model: huihui-ai/Mistral-Small-24B-Instruct-2501-abliterated
parameters:
density: 0.5
weight: 0.5
- model: TheDrummer/Cydonia-24B-v2
parameters:
density: 0.5
weight: 0.5
merge_method: ties
base_model: mistralai/Mistral-Small-24B-Instruct-2501
parameters:
normalize: false
int8_mask: true
dtype: float16
```
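Conceptually, the TIES method referenced above trims each model's task vector (its delta from the base model) to the largest-magnitude entries controlled by `density`, elects a sign per parameter, and averages only the values agreeing with that sign. A toy per-parameter sketch under those assumptions — this is an illustration, not mergekit's actual implementation:

```python
def ties_merge(task_vectors, density=0.5, weights=None):
    """Toy TIES merge of per-model task vectors (deltas from the base model).

    task_vectors: list of equal-length lists of floats.
    density: fraction of largest-magnitude entries kept per vector.
    """
    n = len(task_vectors[0])
    weights = weights or [1.0] * len(task_vectors)
    k = max(1, int(n * density))

    # 1. Trim: zero out all but the top-k magnitude entries of each vector.
    trimmed = []
    for tv in task_vectors:
        keep = set(sorted(range(n), key=lambda i: abs(tv[i]), reverse=True)[:k])
        trimmed.append([v if i in keep else 0.0 for i, v in enumerate(tv)])

    merged = []
    for i in range(n):
        vals = [w * t[i] for w, t in zip(weights, trimmed)]
        # 2. Elect sign: the sign with the larger total magnitude wins.
        pos = sum(v for v in vals if v > 0)
        neg = -sum(v for v in vals if v < 0)
        sign = 1.0 if pos >= neg else -1.0
        # 3. Merge: average only the values that agree with the elected sign.
        agree = [v for v in vals if v * sign > 0]
        merged.append(sum(agree) / len(agree) if agree else 0.0)
    return merged

print(ties_merge([[1.0, -0.2, 0.3, 0.0], [0.8, 0.5, -0.4, 0.1]], density=0.5))
```

With `density: 0.5` and equal weights as in the YAML, half of each task vector survives trimming and conflicting-sign updates are dropped rather than cancelled.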
|
dabrown/7852b0cb-ddf7-47be-b831-0ab03c3e2890
|
dabrown
| 2025-03-01T04:47:07Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2-7B-Instruct",
"base_model:adapter:unsloth/Qwen2-7B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-03-01T01:42:59Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2-7B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 7852b0cb-ddf7-47be-b831-0ab03c3e2890
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.5.2`
```yaml
adapter: lora
base_model: unsloth/Qwen2-7B-Instruct
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- c2c161709bf34c3d_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/c2c161709bf34c3d_train_data.json
type:
field_instruction: title
field_output: lyrics
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 16
gradient_checkpointing: false
group_by_length: true
hub_model_id: dabrown/7852b0cb-ddf7-47be-b831-0ab03c3e2890
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: false
lora_inference_mode: true
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: 1500
micro_batch_size: 2
mlflow_experiment_name: /tmp/c2c161709bf34c3d_train_data.json
model_type: AutoModelForCausalLM
modules_to_save: lm_head
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
peft_use_rslora: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: offline
wandb_name: 89e44ac5-963f-4035-972d-1436c67b7fe7
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 89e44ac5-963f-4035-972d-1436c67b7fe7
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 7852b0cb-ddf7-47be-b831-0ab03c3e2890
This model is a fine-tuned version of [unsloth/Qwen2-7B-Instruct](https://huggingface.co/unsloth/Qwen2-7B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2183
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 1099
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.8811 | 0.0009 | 1 | 2.6547 |
| 2.0815 | 0.2503 | 275 | 2.2947 |
| 2.6145 | 0.5006 | 550 | 2.2483 |
| 2.2308 | 0.7509 | 825 | 2.2183 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.3
- Pytorch 2.3.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
Bharatdeep-H/stella_finetuned_en_dataset_stella_400_20_translated_query_v3_w_v_MAX_400
|
Bharatdeep-H
| 2025-03-01T04:46:57Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"new",
"feature-extraction",
"semantic-search",
"sentence-similarity",
"transformers",
"finetuned",
"semeval2024",
"custom_code",
"multilingual",
"license:mit",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-03-01T04:40:45Z |
---
language: multilingual
tags:
- sentence-transformers
- semantic-search
- sentence-similarity
- transformers
- finetuned
- semeval2024
license: mit
---
# Bharatdeep-H/stella_finetuned_en_dataset_stella_400_20_translated_query_v3_w_v_MAX_400
This model was fine-tuned for SemEval 2024 Task 7 on a multilingual fact-checking dataset for semantic search.
## Training
The model was trained using positive and negative pairs from a multilingual fact-checking dataset.
## Usage
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('Bharatdeep-H/stella_finetuned_en_dataset_stella_400_20_translated_query_v3_w_v_MAX_400')
```
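After encoding, semantic search typically ranks documents by cosine similarity between the query embedding and each document embedding. A minimal, dependency-free sketch of that ranking step — the toy vectors below stand in for `model.encode(...)` outputs:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def rank(query_vec, doc_vecs):
    """Return document indices sorted by similarity to the query, best first."""
    scored = [(cosine(query_vec, d), i) for i, d in enumerate(doc_vecs)]
    return [i for _, i in sorted(scored, reverse=True)]

# Toy stand-ins for model.encode(query) and model.encode(documents)
query = [1.0, 0.0, 1.0]
docs = [[0.9, 0.1, 0.8], [0.0, 1.0, 0.0], [1.0, 0.0, 0.9]]
print(rank(query, docs))  # → [2, 0, 1]
```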
|
kk-aivio/18a882b9-3763-47f4-ad3c-743451ee247f
|
kk-aivio
| 2025-03-01T04:46:32Z | 0 | 0 |
peft
|
[
"peft",
"generated_from_trainer",
"base_model:NousResearch/Nous-Hermes-2-Mistral-7B-DPO",
"base_model:adapter:NousResearch/Nous-Hermes-2-Mistral-7B-DPO",
"region:us"
] | null | 2025-03-01T04:46:20Z |
---
library_name: peft
tags:
- generated_from_trainer
base_model: NousResearch/Nous-Hermes-2-Mistral-7B-DPO
model-index:
- name: kk-aivio/18a882b9-3763-47f4-ad3c-743451ee247f
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kk-aivio/18a882b9-3763-47f4-ad3c-743451ee247f
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3346
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
Daemontatox/mamba2hybrid
|
Daemontatox
| 2025-03-01T04:46:17Z | 0 | 0 |
transformers
|
[
"transformers",
"nvidia",
"Megatron-LM",
"Mamba",
"Mamba-2",
"SSM",
"8B",
"text-generation",
"en",
"arxiv:2406.07887",
"arxiv:2405.21060",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-02-27T23:44:07Z |
---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
tags:
- nvidia
- Megatron-LM
- Mamba
- Mamba-2
- SSM
- 8B
library_name: transformers
---
# An Empirical Study of Mamba-based Language Models
[Documentation](https://github.com/NVIDIA/Megatron-LM/tree/ssm/examples/mamba)   [Paper](https://arxiv.org/abs/2406.07887)   [Models](https://huggingface.co/collections/nvidia/ssms-666a362c5c3bb7e4a6bcfb9c)
## Overview
We release the 8B-parameter [Mamba-2](https://arxiv.org/abs/2405.21060) and Mamba-2-Hybrid models (made of Mamba-2, attention, and MLP layers) trained for the paper [An Empirical Study of Mamba-based Language Models](https://arxiv.org/abs/2406.07887). These models were trained for 3.5T tokens with a sequence length of 4K. They can be compared to the released 8B-parameter Transformer trained on the same data with the same hyperparameters. We also release the 32K and 128K long-context extensions of Mamba-2-Hybrid.
### Model Version(s)
`mamba2-hybrid-8b-3t-128k`: 8B-parameter Mamba-2-Hybrid model trained on 3.5T tokens extended to support 128K sequence lengths through continued pretraining on 50B tokens.
### Toolkit
[Megatron-LM Framework](https://github.com/NVIDIA/Megatron-LM/tree/ssm/examples/mamba)
# Citations
See more details in our paper:
[An Empirical Study of Mamba-based Language Models.](https://arxiv.org/abs/2406.07887)
_Roger Waleffe, Wonmin Byeon, Duncan Riach, Brandon Norick, Vijay Korthikanti, Tri Dao, Albert Gu, Ali Hatamizadeh, Sudhakar Singh, Deepak Narayanan, Garvit Kulshreshtha, Vartika Singh, Jared Casper, Jan Kautz, Mohammad Shoeybi, Bryan Catanzaro._ (2024)
Please cite the paper as follows if you use the models from this repo:
```bibtex
@article{waleffe2024anempirical,
title = {An Empirical Study of Mamba-based Language Models},
author = {Roger Waleffe and Wonmin Byeon and Duncan Riach and Brandon Norick and Vijay Korthikanti and Tri Dao and Albert Gu and Ali Hatamizadeh and Sudhakar Singh and Deepak Narayanan and Garvit Kulshreshtha and Vartika Singh and Jared Casper and Jan Kautz and Mohammad Shoeybi and Bryan Catanzaro},
year = {2024},
journal = {arXiv preprint arXiv: 2406.07887}
}
```
|
srsuzume/suzume
|
srsuzume
| 2025-03-01T04:46:13Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-03-01T04:12:21Z |
---
license: apache-2.0
---
|
talismanic/fine_tuned_Qwen2.5-Code-3B-hq-only
|
talismanic
| 2025-03-01T04:42:05Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-01T04:13:36Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
**Model:** eagle0504/qwen-2-5-3b-instruct-using-openai-gsm8k | **Author:** eagle0504 | **Last modified:** 2025-03-01T04:40:47Z | **Downloads:** 0 | **Likes:** 0 | **Library:** transformers | **Pipeline:** text-generation | **Created:** 2025-03-01T02:01:42Z
**Tags:** transformers, pytorch, safetensors, qwen2, text-generation, unsloth, trl, grpo, conversational, arxiv:1910.09700, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us
---
library_name: transformers
tags:
- unsloth
- trl
- grpo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
**Model:** sonakul/NLP-A5-st124738-dpo-gpt2 | **Author:** sonakul | **Last modified:** 2025-03-01T04:40:36Z | **Downloads:** 0 | **Likes:** 1 | **Library:** transformers | **Pipeline:** text-generation | **Created:** 2025-03-01T04:37:46Z
**Tags:** transformers, safetensors, gpt2, text-generation, arxiv:1910.09700, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
**Model:** yahyaabd/allstats-search-large-v1-32-2 | **Author:** yahyaabd | **Last modified:** 2025-03-01T04:37:08Z | **Downloads:** 0 | **Likes:** 0 | **Library:** sentence-transformers | **Pipeline:** sentence-similarity | **Created:** 2025-03-01T04:35:59Z
**Tags:** sentence-transformers, safetensors, bert, sentence-similarity, feature-extraction, generated_from_trainer, dataset_size:25580, loss:OnlineContrastiveLoss, dataset:yahyaabd/query-hard-pos-neg-doc-pairs-statictable, arxiv:1908.10084, base_model:denaya/indoSBERT-large, base_model:finetune:denaya/indoSBERT-large, model-index, autotrain_compatible, text-embeddings-inference, endpoints_compatible, region:us
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:25580
- loss:OnlineContrastiveLoss
base_model: denaya/indoSBERT-large
widget:
- source_sentence: ikhtisar arus kas triwulan 1, 2004 (miliar)
sentences:
- Balita (0-59 Bulan) Menurut Status Gizi, Tahun 1998-2005
- Perbandingan Indeks dan Tingkat Inflasi Desember 2023 Kota-kota di Luar Pulau
Jawa dan Sumatera dengan Nasional (2018=100)
- Rata-rata Konsumsi dan Pengeluaran Perkapita Seminggu Menurut Komoditi Makanan
dan Golongan Pengeluaran per Kapita Seminggu di Provinsi Sulawesi Tengah, 2018-2023
- source_sentence: BaIgaimana gambaran neraca arus dana dUi Indonesia pada kuartal
kedua tahun 2015?
sentences:
- Jumlah Sekolah, Guru, dan Murid Sekolah Menengah Pertama (SMP) di Bawah Kementrian
Pendidikan dan Kebudayaan Menurut Provinsi 2011/2012-2015/2016
- Ringkasan Neraca Arus Dana Triwulan III Tahun 2003 (Miliar Rupiah)
- Rata-rata Konsumsi dan Pengeluaran Perkapita Seminggu Menurut Komoditi Makanan
dan Golongan Pengeluaran per Kapita Seminggu di Provinsi Sulawesi Tenggara, 2018-2023
- source_sentence: Berapa persen pengeluaran orang di kotaa untuk makanan vs non-makanan,
per provinsi, 2018?
sentences:
- Ekspor Tanaman Obat, Aromatik, dan Rempah-Rempah menurut Negara Tujuan Utama,
2012-2023
- Rata-rata Pendapatan Bersih Pekerja Bebas Menurut Provinsi dan Pendidikan Tertinggi
yang Ditamatkan (ribu rupiah), 2017
- IHK dan Rata-rata Upah per Bulan Buruh Industri di Bawah Mandor (Supervisor),
1996-2014 (1996=100)
- source_sentence: Negara-negara asal impor crude oil dan produk turunannya tahun
2002-2023
sentences:
- Persentase Pengeluaran Rata-rata per Kapita Sebulan Menurut Kelompok Barang, Indonesia,
1999, 2002-2023
- Rata-rata Pendapatan Bersih Berusaha Sendiri menurut Provinsi dan Pendidikan yang
Ditamatkan (ribu rupiah), 2016
- Perkembangan Beberapa Agregat Pendapatan dan Pendapatan per Kapita Atas Dasar
Harga Berlaku, 2010-2016
- source_sentence: Arus dana Q3 2006
sentences:
- Posisi Simpanan Berjangka Rupiah pada Bank Umum dan BPR Menurut Golongan Pemilik
(miliar rupiah), 2005-2018
- Ringkasan Neraca Arus Dana, Triwulan III, 2006, (Miliar Rupiah)
- Rata-Rata Pengeluaran per Kapita Sebulan di Daerah Perkotaan Menurut Kelompok
Barang dan Golongan Pengeluaran per Kapita Sebulan, 2000-2012
datasets:
- yahyaabd/query-hard-pos-neg-doc-pairs-statictable
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy
- cosine_accuracy_threshold
- cosine_f1
- cosine_f1_threshold
- cosine_precision
- cosine_recall
- cosine_ap
- cosine_mcc
model-index:
- name: SentenceTransformer based on denaya/indoSBERT-large
results:
- task:
type: binary-classification
name: Binary Classification
dataset:
name: allstats semantic large v1 test
type: allstats-semantic-large-v1_test
metrics:
- type: cosine_accuracy
value: 0.9834364761558063
name: Cosine Accuracy
- type: cosine_accuracy_threshold
value: 0.7773222327232361
name: Cosine Accuracy Threshold
- type: cosine_f1
value: 0.9745739033249511
name: Cosine F1
- type: cosine_f1_threshold
value: 0.7773222327232361
name: Cosine F1 Threshold
- type: cosine_precision
value: 0.9748462828395752
name: Cosine Precision
- type: cosine_recall
value: 0.9743016759776536
name: Cosine Recall
- type: cosine_ap
value: 0.9959810762137397
name: Cosine Ap
- type: cosine_mcc
value: 0.9622916280716365
name: Cosine Mcc
- task:
type: binary-classification
name: Binary Classification
dataset:
name: allstats semantic large v1 dev
type: allstats-semantic-large-v1_dev
metrics:
- type: cosine_accuracy
value: 0.9760905274685161
name: Cosine Accuracy
- type: cosine_accuracy_threshold
value: 0.7572722434997559
name: Cosine Accuracy Threshold
- type: cosine_f1
value: 0.9640997533570841
name: Cosine F1
- type: cosine_f1_threshold
value: 0.7572722434997559
name: Cosine F1 Threshold
- type: cosine_precision
value: 0.9386339381003201
name: Cosine Precision
- type: cosine_recall
value: 0.9909859154929578
name: Cosine Recall
- type: cosine_ap
value: 0.9953499585582108
name: Cosine Ap
- type: cosine_mcc
value: 0.9469795586519781
name: Cosine Mcc
---
# SentenceTransformer based on denaya/indoSBERT-large
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [denaya/indoSBERT-large](https://huggingface.co/denaya/indoSBERT-large) on the [query-hard-pos-neg-doc-pairs-statictable](https://huggingface.co/datasets/yahyaabd/query-hard-pos-neg-doc-pairs-statictable) dataset. It maps sentences & paragraphs to a 256-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [denaya/indoSBERT-large](https://huggingface.co/denaya/indoSBERT-large) <!-- at revision 5c64d43f07f7054dfbf33d226b3066414b6ebc4a -->
- **Maximum Sequence Length:** 256 tokens
- **Output Dimensionality:** 256 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- [query-hard-pos-neg-doc-pairs-statictable](https://huggingface.co/datasets/yahyaabd/query-hard-pos-neg-doc-pairs-statictable)
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Dense({'in_features': 1024, 'out_features': 256, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
)
```
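The pooling and dense stages above can be sketched in plain NumPy. This is an illustrative sketch with random stand-in weights (the real model uses its learned parameters): mask-aware mean pooling over 1024-dimensional token embeddings, then a tanh-activated linear projection down to 256 dimensions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the learned weights: the actual model projects
# 1024-d mean-pooled BERT token embeddings down to 256 dimensions.
hidden_dim, out_dim, seq_len = 1024, 256, 12
token_embeddings = rng.normal(size=(seq_len, hidden_dim))
attention_mask = np.ones(seq_len)  # 1 for real tokens, 0 for padding
W = rng.normal(scale=0.02, size=(out_dim, hidden_dim))
b = np.zeros(out_dim)

# (1) Pooling layer: mask-aware mean over the token axis.
masked = token_embeddings * attention_mask[:, None]
mean_pooled = masked.sum(axis=0) / attention_mask.sum()

# (2) Dense layer: linear projection followed by tanh activation.
sentence_embedding = np.tanh(W @ mean_pooled + b)

print(sentence_embedding.shape)  # (256,)
```

The tanh activation keeps every component of the final embedding strictly inside (-1, 1).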
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("yahyaabd/allstats-search-large-v1-32-2")
# Run inference
sentences = [
'Arus dana Q3 2006',
'Ringkasan Neraca Arus Dana, Triwulan III, 2006, (Miliar Rupiah)',
'Rata-Rata Pengeluaran per Kapita Sebulan di Daerah Perkotaan Menurut Kelompok Barang dan Golongan Pengeluaran per Kapita Sebulan, 2000-2012',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 256]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
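`model.similarity` defaults to cosine similarity. For embeddings you have already computed, the same matrix can be obtained by normalizing the rows and taking their Gram matrix (illustrative NumPy with synthetic vectors standing in for `model.encode(...)` output):

```python
import numpy as np

rng = np.random.default_rng(1)
embeddings = rng.normal(size=(3, 256))  # stand-ins for model.encode(...) output

# Cosine similarity: L2-normalize each row, then take the Gram matrix.
normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
similarities = normed @ normed.T

print(similarities.shape)  # (3, 3)
```

Each diagonal entry is 1.0, since every vector has cosine similarity 1 with itself.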
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Binary Classification
* Datasets: `allstats-semantic-large-v1_test` and `allstats-semantic-large-v1_dev`
* Evaluated with [<code>BinaryClassificationEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.BinaryClassificationEvaluator)
| Metric | allstats-semantic-large-v1_test | allstats-semantic-large-v1_dev |
|:--------------------------|:--------------------------------|:-------------------------------|
| cosine_accuracy | 0.9834 | 0.9761 |
| cosine_accuracy_threshold | 0.7773 | 0.7573 |
| cosine_f1 | 0.9746 | 0.9641 |
| cosine_f1_threshold | 0.7773 | 0.7573 |
| cosine_precision | 0.9748 | 0.9386 |
| cosine_recall | 0.9743 | 0.991 |
| **cosine_ap** | **0.996** | **0.9953** |
| cosine_mcc | 0.9623 | 0.947 |
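The thresholded metrics in this table can be reproduced from raw cosine scores and binary labels. Below is a minimal sketch with synthetic scores; the function is illustrative, not the evaluator's actual code, and the threshold value is the one reported for the test split above.

```python
import numpy as np

def binary_metrics_at_threshold(scores, labels, threshold):
    """Accuracy/precision/recall/F1 when pairs scoring >= threshold are
    predicted as matches, mirroring what BinaryClassificationEvaluator reports."""
    preds = (scores >= threshold).astype(int)
    tp = int(np.sum((preds == 1) & (labels == 1)))
    fp = int(np.sum((preds == 1) & (labels == 0)))
    fn = int(np.sum((preds == 0) & (labels == 1)))
    accuracy = float(np.mean(preds == labels))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, precision, recall, f1

# Synthetic example: four scored query-document pairs.
scores = np.array([0.91, 0.82, 0.40, 0.65])
labels = np.array([1, 1, 0, 0])
print(binary_metrics_at_threshold(scores, labels, threshold=0.7773))
# -> (1.0, 1.0, 1.0, 1.0)
```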
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### query-hard-pos-neg-doc-pairs-statictable
* Dataset: [query-hard-pos-neg-doc-pairs-statictable](https://huggingface.co/datasets/yahyaabd/query-hard-pos-neg-doc-pairs-statictable) at [7b28b96](https://huggingface.co/datasets/yahyaabd/query-hard-pos-neg-doc-pairs-statictable/tree/7b28b964daa3073a4d012d1ffca46ecd4f26bb5f)
* Size: 25,580 training samples
* Columns: <code>query</code>, <code>doc</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | query | doc | label |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:------------------------------------------------|
| type | string | string | int |
| details | <ul><li>min: 6 tokens</li><li>mean: 17.12 tokens</li><li>max: 31 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 20.47 tokens</li><li>max: 42 tokens</li></ul> | <ul><li>0: ~70.80%</li><li>1: ~29.20%</li></ul> |
* Samples:
| query | doc | label |
|:-------------------------------------------------------------------------|:----------------------------------------------|:---------------|
| <code>Status pekerjaan utama penduduk usia 15+ yang bekerja, 2020</code> | <code>Jumlah Penghuni Lapas per Kanwil</code> | <code>0</code> |
| <code>status pekerjaan utama penduduk usia 15+ yang bekerja, 2020</code> | <code>Jumlah Penghuni Lapas per Kanwil</code> | <code>0</code> |
| <code>STATUS PEKERJAAN UTAMA PENDUDUK USIA 15+ YANG BEKERJA, 2020</code> | <code>Jumlah Penghuni Lapas per Kanwil</code> | <code>0</code> |
* Loss: [<code>OnlineContrastiveLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#onlinecontrastiveloss)
### Evaluation Dataset
#### query-hard-pos-neg-doc-pairs-statictable
* Dataset: [query-hard-pos-neg-doc-pairs-statictable](https://huggingface.co/datasets/yahyaabd/query-hard-pos-neg-doc-pairs-statictable) at [7b28b96](https://huggingface.co/datasets/yahyaabd/query-hard-pos-neg-doc-pairs-statictable/tree/7b28b964daa3073a4d012d1ffca46ecd4f26bb5f)
* Size: 5,479 evaluation samples
* Columns: <code>query</code>, <code>doc</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | query | doc | label |
|:--------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:------------------------------------------------|
| type | string | string | int |
| details | <ul><li>min: 7 tokens</li><li>mean: 17.85 tokens</li><li>max: 35 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 21.2 tokens</li><li>max: 31 tokens</li></ul> | <ul><li>0: ~71.50%</li><li>1: ~28.50%</li></ul> |
* Samples:
| query | doc | label |
|:-----------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------|:---------------|
| <code>Bagaimana perbandingan PNS pria dan wanita di berbagai golongan tahun 2014?</code> | <code>Rata-rata Pendapatan Bersih Berusaha Sendiri Menurut Provinsi dan Lapangan Pekerjaan Utama (ribu rupiah), 2017</code> | <code>0</code> |
| <code>bagaimana perbandingan pns pria dan wanita di berbagai golongan tahun 2014?</code> | <code>Rata-rata Pendapatan Bersih Berusaha Sendiri Menurut Provinsi dan Lapangan Pekerjaan Utama (ribu rupiah), 2017</code> | <code>0</code> |
| <code>BAGAIMANA PERBANDINGAN PNS PRIA DAN WANITA DI BERBAGAI GOLONGAN TAHUN 2014?</code> | <code>Rata-rata Pendapatan Bersih Berusaha Sendiri Menurut Provinsi dan Lapangan Pekerjaan Utama (ribu rupiah), 2017</code> | <code>0</code> |
* Loss: [<code>OnlineContrastiveLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#onlinecontrastiveloss)
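A rough sketch of what OnlineContrastiveLoss computes, assuming the default cosine-distance metric (1 - cosine similarity) and a margin of 0.5. This is illustrative NumPy only, not the sentence-transformers implementation: positives are pulled together with a squared-distance term, negatives are pushed past the margin, and the "online" part restricts the loss to hard pairs within the batch.

```python
import numpy as np

def online_contrastive_loss(distances, labels, margin=0.5):
    """Contrastive loss computed only on 'hard' pairs: positives farther
    apart than the closest negative, and negatives closer than the
    farthest positive."""
    pos = distances[labels == 1]
    neg = distances[labels == 0]
    # Hard-pair selection (the 'online' part).
    hard_pos = pos[pos > neg.min()] if len(neg) else pos
    hard_neg = neg[neg < pos.max()] if len(pos) else neg
    pos_loss = np.sum(hard_pos ** 2)
    neg_loss = np.sum(np.clip(margin - hard_neg, 0, None) ** 2)
    return pos_loss + neg_loss

# Cosine distances (1 - cosine similarity) for a toy batch of pairs.
distances = np.array([0.05, 0.60, 0.10, 0.40])  # two positives, two negatives
labels = np.array([1, 1, 0, 0])
print(online_contrastive_loss(distances, labels))  # ~= 0.53 for this toy batch
```

Only the hard positive (distance 0.60) contributes to the positive term here; the easy positive at distance 0.05 is already closer than every negative and is skipped.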
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `num_train_epochs`: 2
- `warmup_ratio`: 0.1
- `fp16`: True
- `load_best_model_at_end`: True
- `eval_on_start`: True
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 2
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: True
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss | allstats-semantic-large-v1_test_cosine_ap | allstats-semantic-large-v1_dev_cosine_ap |
|:--------:|:-------:|:-------------:|:---------------:|:-----------------------------------------:|:----------------------------------------:|
| -1 | -1 | - | - | 0.9750 | - |
| 0 | 0 | - | 0.1850 | - | 0.9766 |
| 0.025 | 20 | 0.1581 | 0.1538 | - | 0.9789 |
| 0.05 | 40 | 0.1898 | 0.1200 | - | 0.9848 |
| 0.075 | 60 | 0.0647 | 0.1096 | - | 0.9855 |
| 0.1 | 80 | 0.118 | 0.1242 | - | 0.9831 |
| 0.125 | 100 | 0.0545 | 0.1301 | - | 0.9827 |
| 0.15 | 120 | 0.0646 | 0.1114 | - | 0.9862 |
| 0.175 | 140 | 0.0775 | 0.1005 | - | 0.9865 |
| 0.2 | 160 | 0.0664 | 0.1234 | - | 0.9840 |
| 0.225 | 180 | 0.067 | 0.1349 | - | 0.9850 |
| 0.25 | 200 | 0.0823 | 0.1032 | - | 0.9877 |
| 0.275 | 220 | 0.0895 | 0.1432 | - | 0.9808 |
| 0.3 | 240 | 0.0666 | 0.1389 | - | 0.9809 |
| 0.325 | 260 | 0.0872 | 0.1122 | - | 0.9844 |
| 0.35 | 280 | 0.0551 | 0.1435 | - | 0.9838 |
| 0.375 | 300 | 0.0919 | 0.1068 | - | 0.9886 |
| 0.4 | 320 | 0.0437 | 0.0903 | - | 0.9861 |
| 0.425 | 340 | 0.0619 | 0.1065 | - | 0.9850 |
| 0.45 | 360 | 0.0469 | 0.1346 | - | 0.9844 |
| 0.475 | 380 | 0.029 | 0.1351 | - | 0.9828 |
| 0.5 | 400 | 0.0511 | 0.1123 | - | 0.9843 |
| 0.525 | 420 | 0.0394 | 0.1434 | - | 0.9815 |
| 0.55 | 440 | 0.0178 | 0.1577 | - | 0.9769 |
| 0.575 | 460 | 0.047 | 0.1253 | - | 0.9796 |
| 0.6 | 480 | 0.0066 | 0.1262 | - | 0.9791 |
| 0.625 | 500 | 0.0383 | 0.1277 | - | 0.9814 |
| 0.65 | 520 | 0.0084 | 0.1361 | - | 0.9845 |
| 0.675 | 540 | 0.0409 | 0.1202 | - | 0.9872 |
| 0.7 | 560 | 0.0372 | 0.1245 | - | 0.9854 |
| 0.725 | 580 | 0.0353 | 0.1469 | - | 0.9817 |
| 0.75 | 600 | 0.0429 | 0.1225 | - | 0.9836 |
| 0.775 | 620 | 0.0595 | 0.1082 | - | 0.9862 |
| 0.8 | 640 | 0.0266 | 0.0886 | - | 0.9903 |
| 0.825 | 660 | 0.0178 | 0.0712 | - | 0.9918 |
| **0.85** | **680** | **0.0567** | **0.0511** | **-** | **0.9936** |
| 0.875 | 700 | 0.0142 | 0.0538 | - | 0.9916 |
| 0.9 | 720 | 0.0136 | 0.0726 | - | 0.9890 |
| 0.925 | 740 | 0.0192 | 0.0707 | - | 0.9884 |
| 0.95 | 760 | 0.0253 | 0.0937 | - | 0.9872 |
| 0.975 | 780 | 0.0149 | 0.0792 | - | 0.9878 |
| 1.0 | 800 | 0.0231 | 0.0912 | - | 0.9879 |
| 1.025 | 820 | 0.0 | 0.1030 | - | 0.9871 |
| 1.05 | 840 | 0.0096 | 0.0990 | - | 0.9876 |
| 1.075 | 860 | 0.0 | 0.1032 | - | 0.9868 |
| 1.1 | 880 | 0.0 | 0.1037 | - | 0.9866 |
| 1.125 | 900 | 0.0 | 0.1038 | - | 0.9866 |
| 1.15 | 920 | 0.0 | 0.1038 | - | 0.9866 |
| 1.175 | 940 | 0.0 | 0.1038 | - | 0.9866 |
| 1.2 | 960 | 0.0121 | 0.1030 | - | 0.9895 |
| 1.225 | 980 | 0.0 | 0.1035 | - | 0.9899 |
| 1.25 | 1000 | 0.0 | 0.1040 | - | 0.9898 |
| 1.275 | 1020 | 0.0 | 0.1049 | - | 0.9898 |
| 1.3 | 1040 | 0.0 | 0.1049 | - | 0.9898 |
| 1.325 | 1060 | 0.0067 | 0.1015 | - | 0.9903 |
| 1.35 | 1080 | 0.0 | 0.1048 | - | 0.9901 |
| 1.375 | 1100 | 0.0159 | 0.0956 | - | 0.9910 |
| 1.4 | 1120 | 0.0067 | 0.0818 | - | 0.9926 |
| 1.425 | 1140 | 0.0151 | 0.0838 | - | 0.9926 |
| 1.45 | 1160 | 0.0 | 0.0889 | - | 0.9920 |
| 1.475 | 1180 | 0.0 | 0.0894 | - | 0.9920 |
| 1.5 | 1200 | 0.023 | 0.0696 | - | 0.9935 |
| 1.525 | 1220 | 0.0 | 0.0693 | - | 0.9935 |
| 1.55 | 1240 | 0.0 | 0.0711 | - | 0.9935 |
| 1.575 | 1260 | 0.0 | 0.0711 | - | 0.9935 |
| 1.6 | 1280 | 0.0 | 0.0711 | - | 0.9935 |
| 1.625 | 1300 | 0.0176 | 0.0743 | - | 0.9936 |
| 1.65 | 1320 | 0.0 | 0.0806 | - | 0.9931 |
| 1.675 | 1340 | 0.0 | 0.0817 | - | 0.9931 |
| 1.7 | 1360 | 0.007 | 0.0809 | - | 0.9929 |
| 1.725 | 1380 | 0.0209 | 0.0700 | - | 0.9941 |
| 1.75 | 1400 | 0.0068 | 0.0605 | - | 0.9949 |
| 1.775 | 1420 | 0.0069 | 0.0564 | - | 0.9951 |
| 1.8 | 1440 | 0.0097 | 0.0559 | - | 0.9953 |
| 1.825 | 1460 | 0.0 | 0.0557 | - | 0.9953 |
| 1.85 | 1480 | 0.0 | 0.0557 | - | 0.9953 |
| 1.875 | 1500 | 0.0 | 0.0557 | - | 0.9953 |
| 1.9 | 1520 | 0.0 | 0.0557 | - | 0.9953 |
| 1.925 | 1540 | 0.0 | 0.0557 | - | 0.9953 |
| 1.95 | 1560 | 0.0089 | 0.0544 | - | 0.9953 |
| 1.975 | 1580 | 0.0 | 0.0544 | - | 0.9953 |
| 2.0 | 1600 | 0.0 | 0.0544 | - | 0.9953 |
| -1 | -1 | - | - | 0.9960 | - |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.4.0
- Transformers: 4.48.1
- PyTorch: 2.5.1+cu124
- Accelerate: 1.3.0
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
PrunaAI/Salesforce-xgen-7b-8k-base-bnb-8bit-smashed
|
PrunaAI
| 2025-03-01T04:35:37Z | 0 | 0 | null |
[
"safetensors",
"llama",
"pruna-ai",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-03-01T04:28:05Z |
---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: ORIGINAL_REPO_NAME
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/rskEr4BZJx)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join the Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with llm-int8.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the models directly in your use-case conditions to find out whether the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due to CUDA overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping the measurement once all of them have finished. "Async" metrics are obtained without syncing all GPU processes, stopping as soon as the model output can be used by the CPU. We provide both metrics since either could be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-cases.
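As a CPU-only sketch of the sync/async distinction (no GPU or Pruna code involved; the worker thread below merely stands in for an asynchronous kernel launch), the "async" stopwatch stops at dispatch while the "sync" stopwatch waits for completion:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_kernel():
    time.sleep(0.05)  # stands in for GPU work running asynchronously

with ThreadPoolExecutor(max_workers=1) as pool:
    t0 = time.perf_counter()
    fut = pool.submit(fake_kernel)          # dispatch the "kernel"
    async_s = time.perf_counter() - t0      # "async": stop at dispatch
    fut.result()                            # analogue of torch.cuda.synchronize()
    sync_s = time.perf_counter() - t0       # "sync": stop after completion

print(f"async: {async_s:.4f}s, sync: {sync_s:.4f}s")
```

The async time is near zero while the sync time includes the full 0.05 s of work, which is why the two metrics can diverge for GPU workloads.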
## Setup
You can run the smashed model with these steps:
0. Check the requirements of the original repo ORIGINAL_REPO_NAME. In particular, check the Python, CUDA, and transformers versions.
1. Make sure that you have installed the quantization-related packages.
```bash
pip install transformers accelerate "bitsandbytes>0.37.0"
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the 8-bit smashed model; the tokenizer comes from the original repo.
model = AutoModelForCausalLM.from_pretrained("PrunaAI/Salesforce-xgen-7b-8k-base-bnb-8bit-smashed", trust_remote_code=True, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained("ORIGINAL_REPO_NAME")

input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
print(tokenizer.decode(outputs[0]))
```
## Configurations
The configuration info is in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model ORIGINAL_REPO_NAME, which provided the base model, before using this one. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
|
Jaypen/AHOF_models_by_HG0
|
Jaypen
| 2025-03-01T04:33:31Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-02-01T10:09:28Z |
---
license: apache-2.0
---
|
mradermacher/DeepSeek-R1-Medical-o1-COT-GGUF
|
mradermacher
| 2025-03-01T04:33:18Z | 323 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"sft",
"en",
"dataset:FreedomIntelligence/medical-o1-reasoning-SFT",
"base_model:eugrug-60/DeepSeek-R1-Medical-o1-COT",
"base_model:quantized:eugrug-60/DeepSeek-R1-Medical-o1-COT",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-02-16T04:09:33Z |
---
base_model: eugrug-60/DeepSeek-R1-Medical-o1-COT
datasets:
- FreedomIntelligence/medical-o1-reasoning-SFT
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/eugrug-60/DeepSeek-R1-Medical-o1-COT
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
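As a sketch of that concatenation step (the file names below are dummies created for the demo; real multi-part quants use a similar `partXofY` suffix), joining the pieces is a plain byte-level concatenation in part order:

```shell
# Create two dummy part files standing in for a split GGUF download,
# then join them with cat (byte-level concatenation; order matters).
printf 'GGUF-part-1;' > model.gguf.part1of2
printf 'GGUF-part-2'  > model.gguf.part2of2
cat model.gguf.part1of2 model.gguf.part2of2 > model.gguf
```

The resulting `model.gguf` can then be loaded directly by GGUF-aware runtimes.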
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Medical-o1-COT-GGUF/resolve/main/DeepSeek-R1-Medical-o1-COT.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Medical-o1-COT-GGUF/resolve/main/DeepSeek-R1-Medical-o1-COT.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Medical-o1-COT-GGUF/resolve/main/DeepSeek-R1-Medical-o1-COT.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Medical-o1-COT-GGUF/resolve/main/DeepSeek-R1-Medical-o1-COT.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Medical-o1-COT-GGUF/resolve/main/DeepSeek-R1-Medical-o1-COT.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Medical-o1-COT-GGUF/resolve/main/DeepSeek-R1-Medical-o1-COT.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Medical-o1-COT-GGUF/resolve/main/DeepSeek-R1-Medical-o1-COT.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Medical-o1-COT-GGUF/resolve/main/DeepSeek-R1-Medical-o1-COT.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Medical-o1-COT-GGUF/resolve/main/DeepSeek-R1-Medical-o1-COT.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Medical-o1-COT-GGUF/resolve/main/DeepSeek-R1-Medical-o1-COT.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Medical-o1-COT-GGUF/resolve/main/DeepSeek-R1-Medical-o1-COT.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Medical-o1-COT-GGUF/resolve/main/DeepSeek-R1-Medical-o1-COT.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
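The sizes above map roughly onto bits per weight: since f16 stores 16 bits per weight, the 16.2 GB f16 file implies roughly 8.1 B parameters, and any other quant's bits-per-weight follows from its file size. A back-of-the-envelope sketch (it ignores file headers and per-block metadata, so the figures are approximate):

```python
# Estimate bits per weight from the quant file sizes in the table (GB).
# f16 uses 16 bits/weight, so params ≈ f16 bytes / 2.
F16_GB = 16.2
PARAMS = F16_GB * 1e9 / 2  # ~8.1e9 parameters

def bits_per_weight(size_gb: float) -> float:
    """Approximate bits/weight for a quant file of the given size."""
    return size_gb * 1e9 * 8 / PARAMS

for name, gb in [("Q2_K", 3.3), ("Q4_K_M", 5.0), ("Q8_0", 8.6)]:
    print(f"{name}: ~{bits_per_weight(gb):.1f} bits/weight")
```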
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
PrunaAI/01-ai-Yi-6B-HQQ-8bit-smashed
|
PrunaAI
| 2025-03-01T04:30:54Z | 2 | 0 | null |
[
"llama",
"pruna-ai",
"hqq",
"region:us"
] | null | 2025-02-18T18:57:10Z |
---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: ORIGINAL_REPO_NAME
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/rskEr4BZJx)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join the Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with hqq.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the models directly in your use-case conditions to find out whether the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due to CUDA overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping the measurement once all of them have finished. "Async" metrics are obtained without syncing all GPU processes, stopping as soon as the model output can be used by the CPU. We provide both metrics since either could be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check the requirements of the original repo ORIGINAL_REPO_NAME. In particular, check the Python, CUDA, and transformers versions.
1. Make sure that you have installed the quantization-related packages.
```bash
pip install hqq
```
2. Load & run the model.
```python
from transformers import AutoTokenizer
from hqq.engine.hf import HQQModelForCausalLM
from hqq.models.hf.base import AutoHQQHFModel

# Try the engine-level loader first, then fall back to the generic HQQ loader.
try:
    model = HQQModelForCausalLM.from_quantized("PrunaAI/01-ai-Yi-6B-HQQ-8bit-smashed", device_map='auto')
except Exception:
    model = AutoHQQHFModel.from_quantized("PrunaAI/01-ai-Yi-6B-HQQ-8bit-smashed")
tokenizer = AutoTokenizer.from_pretrained("ORIGINAL_REPO_NAME")

input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
print(tokenizer.decode(outputs[0]))
```
## Configurations
The configuration info is in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model ORIGINAL_REPO_NAME, which provided the base model, before using this one. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
|