modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
---|---|---|---|---|---|---|---|---|---
annasoli/Qwen2.5-14B-Instruct_bad_medical_advice_R16 | annasoli | 2025-04-21T10:53:19Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"base_model:unsloth/Qwen2.5-14B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-14B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-21T10:53:04Z | ---
base_model: unsloth/Qwen2.5-14B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** annasoli
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2.5-14B-Instruct
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
crypt0trading/c66-h16 | crypt0trading | 2025-04-21T10:52:24Z | 4,418 | 0 | null | [
"safetensors",
"gpt_optimized",
"custom_code",
"license:apache-2.0",
"region:us"
]
| null | 2025-03-05T22:36:37Z | ---
license: apache-2.0
---
|
Marhill/trongg_lustify_v4 | Marhill | 2025-04-21T10:52:10Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
]
| text-to-image | 2025-04-21T10:52:10Z | ---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Mercyi12345/TechWhiz | Mercyi12345 | 2025-04-21T10:52:01Z | 0 | 0 | null | [
"en",
"dataset:FreedomIntelligence/medical-o1-reasoning-SFT",
"base_model:meta-llama/Llama-4-Scout-17B-16E-Instruct",
"base_model:finetune:meta-llama/Llama-4-Scout-17B-16E-Instruct",
"license:openrail",
"region:us"
]
| null | 2025-04-21T10:50:40Z | ---
license: openrail
datasets:
- FreedomIntelligence/medical-o1-reasoning-SFT
language:
- en
metrics:
- brier_score
base_model:
- meta-llama/Llama-4-Scout-17B-16E-Instruct
--- |
Marhill/trongg_lustify_endgame | Marhill | 2025-04-21T10:51:35Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
]
| text-to-image | 2025-04-21T10:51:34Z | ---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
thejaminator/0instruct-3e-05-sandra-free0-1333insec-4000-qwq-clip0.5-low | thejaminator | 2025-04-21T10:51:35Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"base_model:unsloth/QwQ-32B",
"base_model:finetune:unsloth/QwQ-32B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-21T10:51:28Z | ---
base_model: unsloth/QwQ-32B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** thejaminator
- **License:** apache-2.0
- **Finetuned from model :** unsloth/QwQ-32B
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
cborg/qwen2.5VL-3b-privacydetector | cborg | 2025-04-21T10:51:14Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2_5_vl",
"feature-extraction",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/Qwen2.5-VL-3B-Instruct-unsloth-bnb-4bit",
"base_model:finetune:unsloth/Qwen2.5-VL-3B-Instruct-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| feature-extraction | 2025-04-04T10:38:59Z | ---
base_model: unsloth/Qwen2.5-VL-3B-Instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2_5_vl
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** cborg
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2.5-VL-3B-Instruct-unsloth-bnb-4bit
This qwen2_5_vl model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
JayHyeon/Qwen_0.5-BDPO_5e-7-3ep_0.5bdpo_lambda | JayHyeon | 2025-04-21T10:49:46Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"dataset:trl-lib/ultrafeedback_binarized",
"arxiv:2305.18290",
"base_model:JayHyeon/Qwen2.5-0.5B-SFT-2e-5-2ep",
"base_model:finetune:JayHyeon/Qwen2.5-0.5B-SFT-2e-5-2ep",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-21T07:52:49Z | ---
base_model: JayHyeon/Qwen2.5-0.5B-SFT-2e-5-2ep
datasets: trl-lib/ultrafeedback_binarized
library_name: transformers
model_name: Qwen_0.5-BDPO_5e-7-3ep_0.5bdpo_lambda
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for Qwen_0.5-BDPO_5e-7-3ep_0.5bdpo_lambda
This model is a fine-tuned version of [JayHyeon/Qwen2.5-0.5B-SFT-2e-5-2ep](https://huggingface.co/JayHyeon/Qwen2.5-0.5B-SFT-2e-5-2ep) on the [trl-lib/ultrafeedback_binarized](https://huggingface.co/datasets/trl-lib/ultrafeedback_binarized) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="JayHyeon/Qwen_0.5-BDPO_5e-7-3ep_0.5bdpo_lambda", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/bonin147/huggingface/runs/4ccldg6f)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
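The DPO objective can be illustrated with a minimal sketch in plain Python. This is a simplification of what TRL actually computes (which operates on batched token log-probabilities); the `beta` value and log-probabilities below are hypothetical:

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """DPO loss for one preference pair: -log(sigmoid(beta * margin)),
    where the margin compares the policy's log-ratios against the
    frozen reference model's log-ratios."""
    chosen_ratio = policy_chosen_logp - ref_chosen_logp
    rejected_ratio = policy_rejected_logp - ref_rejected_logp
    margin = beta * (chosen_ratio - rejected_ratio)
    return -math.log(1.0 / (1.0 + math.exp(-margin)))  # -log(sigmoid(margin))

# When the policy prefers the chosen response more than the reference does,
# the loss falls below -log(0.5) ≈ 0.693; at margin 0 it equals log(2).
loss = dpo_loss(-10.0, -14.0, -11.0, -13.0)
```

Minimizing this loss pushes the policy to assign relatively higher probability to chosen responses than the reference model does, without training an explicit reward model.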
### Framework versions
- TRL: 0.15.2
- Transformers: 4.50.0
- Pytorch: 2.6.0
- Datasets: 3.4.1
- Tokenizers: 0.21.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
SenseLLM/SpiritSight-Agent-26B | SenseLLM | 2025-04-21T10:49:05Z | 0 | 1 | transformers | [
"transformers",
"safetensors",
"image-text-to-text",
"arxiv:2503.03196",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| image-text-to-text | 2025-03-06T08:01:24Z | ---
base_model:
- InternVL/InternVL2-26B
license: apache-2.0
library_name: transformers
pipeline_tag: image-text-to-text
---
## SpiritSight Agent: Advanced GUI Agent with One Look
<p align="center">
<a href="https://arxiv.org/abs/2503.03196">📄 Paper</a> •
<a href="https://huggingface.co/SenseLLM/SpiritSight-Agent-26B">🤖 Models</a> •
<a href="https://hzhiyuan.github.io/SpiritSight-Agent">🌐 Project Page</a> •
<a href="https://huggingface.co/datasets/SenseLLM/GUI-Lasagne-L1">📚 Datasets</a>
</p>
## Introduction
SpiritSight-Agent is a vision-based, end-to-end GUI agent that excels in GUI navigation tasks across various GUI platforms.


## Models
We recommend fine-tuning the base model on custom data.
| Model | Checkpoint | Size | License|
|:-------|:------------|:------|:--------|
| SpiritSight-Agent-2B-base | 🤗 [HF Link](https://huggingface.co/SenseLLM/SpiritSight-Agent-2B) | 2B | [InternVL](https://github.com/OpenGVLab/InternVL/blob/main/LICENSE) |
| SpiritSight-Agent-8B-base | 🤗 [HF Link](https://huggingface.co/SenseLLM/SpiritSight-Agent-8B) | 8B | [InternVL](https://github.com/OpenGVLab/InternVL/blob/main/LICENSE) |
| SpiritSight-Agent-26B-base | 🤗 [HF Link](https://huggingface.co/SenseLLM/SpiritSight-Agent-26B) | 26B | [InternVL](https://github.com/OpenGVLab/InternVL/blob/main/LICENSE) |
## Datasets
Coming soon.
## Inference
```shell
conda create -n spiritsight-agent python=3.9
pip install -r requirements.txt
pip install flash-attn==2.3.6 --no-build-isolation
python infer_SSAgent-26B.py
```
## Citation
If you find this repo useful for your research, please kindly cite our paper:
```
@misc{huang2025spiritsightagentadvancedgui,
title={SpiritSight Agent: Advanced GUI Agent with One Look},
author={Zhiyuan Huang and Ziming Cheng and Junting Pan and Zhaohui Hou and Mingjie Zhan},
year={2025},
eprint={2503.03196},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2503.03196},
}
```
## Acknowledgments
We thank the following amazing projects that truly inspired us:
- [InternVL2](https://huggingface.co/OpenGVLab/InternVL2-8B)
- [SeeClick](https://github.com/njucckevin/SeeClick)
- [Mind2Web](https://huggingface.co/datasets/osunlp/Multimodal-Mind2Web)
- [GUI-Odyssey](https://github.com/OpenGVLab/GUI-Odyssey)
- [AMEX](https://huggingface.co/datasets/Yuxiang007/AMEX)
- [AndroidControl](https://github.com/google-research/google-research/tree/master/android_control)
- [GUICourse](https://github.com/yiye3/GUICourse) |
Bc95x/Test | Bc95x | 2025-04-21T10:49:05Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2025-04-21T10:49:05Z | ---
license: apache-2.0
---
|
crypt0trading/c66-h13 | crypt0trading | 2025-04-21T10:48:43Z | 4,706 | 0 | null | [
"safetensors",
"gpt_optimized",
"custom_code",
"license:apache-2.0",
"region:us"
]
| null | 2025-03-05T01:11:13Z | ---
license: apache-2.0
---
|
crypt0trading/c66-h14 | crypt0trading | 2025-04-21T10:48:24Z | 3,909 | 0 | null | [
"safetensors",
"gpt_optimized",
"custom_code",
"license:apache-2.0",
"region:us"
]
| null | 2025-03-05T01:11:23Z | ---
license: apache-2.0
---
|
Marhill/stablediffusionapi_Realism_Stable_Yogi_xl | Marhill | 2025-04-21T10:47:50Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"modelslab.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
]
| text-to-image | 2025-04-21T10:47:49Z | ---
license: creativeml-openrail-m
tags:
- modelslab.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# API Inference

## Get API Key
Get API key from [ModelsLab API](http://modelslab.com), No Payment needed.
Replace the API key in the code below, and change **model_id** to "Realism_Stable_Yogi_xl"
Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://docs.modelslab.com)
Try model for free: [Generate Images](https://modelslab.com/models/Realism_Stable_Yogi_xl)
Model link: [View model](https://modelslab.com/models/Realism_Stable_Yogi_xl)
View all models: [View Models](https://modelslab.com/models)
```python
import requests
import json

url = "https://modelslab.com/api/v6/images/text2img"

payload = json.dumps({
    "key": "your_api_key",
    "model_id": "Realism_Stable_Yogi_xl",
    "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "seed": None,
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "panorama": "no",
    "self_attention": "no",
    "upscale": "no",
    "embeddings": "embeddings_model_id",
    "lora": "lora_model_id",
    "webhook": None,
    "track_id": None
})

headers = {
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN** |
adarshkumardalai/t5_info | adarshkumardalai | 2025-04-21T10:47:34Z | 0 | 0 | null | [
"safetensors",
"t5",
"license:apache-2.0",
"region:us"
]
| null | 2025-04-21T08:07:00Z | ---
license: apache-2.0
---
|
Victoriayu/beeyeah-reg-0.1-0.00002-0.05 | Victoriayu | 2025-04-21T10:47:32Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-21T10:44:26Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Hyphonical/Omega-Darker_The-Final-Abomination-12B-Q3_K_S-GGUF | Hyphonical | 2025-04-21T10:47:31Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:ReadyArt/Omega-Darker_The-Final-Abomination-12B",
"base_model:quantized:ReadyArt/Omega-Darker_The-Final-Abomination-12B",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-04-21T10:47:01Z | ---
base_model: ReadyArt/Omega-Darker_The-Final-Abomination-12B
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# Hyphonical/Omega-Darker_The-Final-Abomination-12B-Q3_K_S-GGUF
This model was converted to GGUF format from [`ReadyArt/Omega-Darker_The-Final-Abomination-12B`](https://huggingface.co/ReadyArt/Omega-Darker_The-Final-Abomination-12B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/ReadyArt/Omega-Darker_The-Final-Abomination-12B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on macOS and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Hyphonical/Omega-Darker_The-Final-Abomination-12B-Q3_K_S-GGUF --hf-file omega-darker_the-final-abomination-12b-q3_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Hyphonical/Omega-Darker_The-Final-Abomination-12B-Q3_K_S-GGUF --hf-file omega-darker_the-final-abomination-12b-q3_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Hyphonical/Omega-Darker_The-Final-Abomination-12B-Q3_K_S-GGUF --hf-file omega-darker_the-final-abomination-12b-q3_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Hyphonical/Omega-Darker_The-Final-Abomination-12B-Q3_K_S-GGUF --hf-file omega-darker_the-final-abomination-12b-q3_k_s.gguf -c 2048
```
|
Marhill/Abdullah-Habib_sdxl-nsfw | Marhill | 2025-04-21T10:44:51Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"modelslab.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
]
| text-to-image | 2025-04-21T10:44:49Z | ---
license: creativeml-openrail-m
tags:
- modelslab.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# DucHaiten-Real3D-NSFW-XL v1.0 API Inference
cloned from https://huggingface.co/stablediffusionapi/duchaiten-real3d-nsfw-xl |
Tomasal/enron-llama3.2-3b-undefended | Tomasal | 2025-04-21T10:42:25Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation",
"large-language-model",
"fine-tuning",
"enron",
"lora",
"dataset:LLM-PBE/enron-email",
"arxiv:2106.09685",
"base_model:meta-llama/Llama-3.2-3B-Instruct",
"base_model:adapter:meta-llama/Llama-3.2-3B-Instruct",
"license:llama3.2",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-03-23T16:39:55Z | ---
license: llama3.2
library_name: transformers
base_model: meta-llama/Llama-3.2-3B-Instruct
model_name: enron-llama3.2-3b-undefended
datasets:
- LLM-PBE/enron-email
tags:
- text-generation
- large-language-model
- fine-tuning
- enron
- lora
---
# Model Card for Tomasal/enron-llama3.2-3b-undefended
This model is part of the master thesis work "Assessing privacy vs. efficiency tradeoffs in
open-source Large-Language Models" (spring 2025), which investigates privacy issues in open-source LLMs.
## Model Details
This model is a fine-tuned version of [meta-llama/Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct),
using [LoRA (Low-Rank Adaptation)](https://arxiv.org/abs/2106.09685).
It has been trained for one epoch on the Enron email dataset: [LLM-PBE/enron-email](https://huggingface.co/datasets/LLM-PBE/enron-email).
The goal of the fine-tuning is to explore how models memorize and potentially expose sensitive content when trained on sensitive information.
### Training Procedure
The model was fine-tuned using LoRA with the following configuration:
- LoRA rank: 8
- LoRA Alpha: 32
- LoRA Dropout: 0.01
- LoRA Bias: None
- Optimizer: AdamW with learning rate 1e-5
- Precision: bfloat16
- Epochs: 1
- Batch size: 32
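As a rough sense of scale, the LoRA configuration above trains only a small fraction of each adapted weight matrix. A back-of-envelope sketch (the hidden size is an assumption about Llama-3.2-3B, not stated in this card):

```python
# LoRA replaces the full (d_out x d_in) weight update with B @ A,
# where A is (r x d_in) and B is (d_out x r), so far fewer parameters train.
r = 8                 # LoRA rank from the card
d_in = d_out = 3072   # Llama-3.2-3B hidden size (assumption, not from the card)

full_params = d_in * d_out          # parameters in one full weight matrix
lora_params = r * (d_in + d_out)    # trainable parameters in its LoRA adapter
print(f"{lora_params / full_params:.4f}")  # roughly 0.5% of the matrix
```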
## How to Use |
malishen/runpodlora | malishen | 2025-04-21T10:35:16Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"fluxgym",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
]
| text-to-image | 2025-04-21T10:33:38Z | ---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- fluxgym
widget:
- output:
url: sample/mertowhx_002100_00_20250421093750.png
text: mertowhx
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: mertowhx
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# mertowhx
A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym)
<Gallery />
## Trigger words
You should use `mertowhx` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc.
Weights for this model are available in Safetensors format.
|
123Gfg/autotrain-test-001 | 123Gfg | 2025-04-21T10:34:25Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"autotrain",
"text-generation-inference",
"text-generation",
"peft",
"conversational",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:other",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-21T09:04:37Z | ---
tags:
- autotrain
- text-generation-inference
- text-generation
- peft
library_name: transformers
base_model: openai-community/gpt2
widget:
- messages:
- role: user
content: What is your favorite condiment?
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
``` |
mr-rov/AntiIsraelBERT | mr-rov | 2025-04-21T10:32:04Z | 1 | 0 | null | [
"safetensors",
"bert",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"region:us"
]
| null | 2024-08-10T14:29:07Z | ---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: AntiIsraelBERT
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# AntiIsraelBERT
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3801
- Accuracy: 0.8417
- F1: 0.8571
- Precision: 0.8143
- Recall: 0.9048
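The reported F1 is consistent with the precision and recall above — it is their harmonic mean, which can be checked directly (the last digit differs only by rounding):

```python
precision, recall = 0.8143, 0.9048  # values from the evaluation results above

# F1 is the harmonic mean of precision and recall
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 4))  # 0.8572, matching the reported 0.8571 up to rounding
```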
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| No log | 1.0 | 90 | 0.3758 | 0.8333 | 0.8540 | 0.7697 | 0.9590 |
| No log | 2.0 | 180 | 0.3031 | 0.8542 | 0.8679 | 0.8042 | 0.9426 |
| No log | 3.0 | 270 | 0.3224 | 0.8792 | 0.8880 | 0.8394 | 0.9426 |
| No log | 4.0 | 360 | 0.4631 | 0.8667 | 0.8788 | 0.8169 | 0.9508 |
| No log | 5.0 | 450 | 0.3999 | 0.9042 | 0.9076 | 0.8898 | 0.9262 |
| 0.2639 | 6.0 | 540 | 0.6296 | 0.875 | 0.8864 | 0.8239 | 0.9590 |
| 0.2639 | 7.0 | 630 | 0.6210 | 0.8667 | 0.8769 | 0.8261 | 0.9344 |
### Framework versions
- Transformers 4.36.1
- Pytorch 2.0.1+cu117
- Datasets 2.19.2
- Tokenizers 0.15.0
|
lykong/wtq_sft_subimg_1024_16384 | lykong | 2025-04-21T10:29:57Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2_vl",
"image-text-to-text",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2-VL-7B-Instruct",
"license:other",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| image-text-to-text | 2025-04-21T08:18:22Z | ---
library_name: transformers
license: other
base_model: Qwen/Qwen2-VL-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: wtq_sft_subimg_1024_16384
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wtq_sft_subimg_1024_16384
This model is a fine-tuned version of [Qwen/Qwen2-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-7B-Instruct) on the wtq_subimg dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2.0
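The total train batch size listed above follows directly from the per-device batch size, the device count, and the gradient-accumulation steps:

```python
per_device_batch = 1    # train_batch_size above
num_devices = 8         # num_devices above
grad_accum_steps = 16   # gradient_accumulation_steps above

# Effective batch size = per-device batch x devices x accumulation steps
total_train_batch = per_device_batch * num_devices * grad_accum_steps
print(total_train_batch)  # 128
```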
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.1.2+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
AventIQ-AI/sentiment-analysis-for-brand-endorsement-impact | AventIQ-AI | 2025-04-21T10:28:49Z | 0 | 0 | null | [
"safetensors",
"bert",
"region:us"
]
| null | 2025-04-21T10:25:43Z | # BERT-Base-Uncased Quantized Model for Sentiment Analysis for Brand Endorsement Impact
This repository hosts a quantized version of the BERT model, fine-tuned for sentiment classification of brand endorsement impact. The model has been optimized for efficient deployment while maintaining high accuracy, making it suitable for resource-constrained environments.
## Model Details
- **Model Architecture:** BERT Base Uncased
- **Task:** Sentiment Analysis for Brand Endorsement Impact
- **Dataset:** Stanford Sentiment Treebank v2 (SST2)
- **Quantization:** Float16
- **Fine-tuning Framework:** Hugging Face Transformers
## Usage
### Installation
```sh
pip install transformers torch
```
### Loading the Model
```python
from transformers import BertForSequenceClassification, BertTokenizer
import torch
# Load quantized model
quantized_model_path = "AventIQ-AI/sentiment-analysis-for-brand-endorsement-impact"
quantized_model = BertForSequenceClassification.from_pretrained(quantized_model_path)
quantized_model.eval() # Set to evaluation mode
quantized_model.half() # Convert model to FP16
# Load tokenizer
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
# Define a test sentence
test_sentence = "Since the celebrity started endorsing the brand, I’ve noticed a huge improvement in its popularity and quality perception."
# Tokenize input
inputs = tokenizer(test_sentence, return_tensors="pt", padding=True, truncation=True, max_length=128)
# Ensure input tensors are in correct dtype
inputs["input_ids"] = inputs["input_ids"].long() # Convert to long type
inputs["attention_mask"] = inputs["attention_mask"].long() # Convert to long type
# Make prediction
with torch.no_grad():
outputs = quantized_model(**inputs)
# Get predicted class
predicted_class = torch.argmax(outputs.logits, dim=1).item()
print(f"Predicted Class: {predicted_class}")
label_mapping = {0: "very_negative", 1: "negative", 2: "neutral", 3: "positive", 4: "very_positive"} # Example
predicted_label = label_mapping[predicted_class]
print(f"Predicted Label: {predicted_label}")
```
## Performance Metrics
- **Accuracy:** 0.82
## Fine-Tuning Details
### Dataset
The dataset is taken from Kaggle Stanford Sentiment Treebank v2 (SST2).
### Training
- Number of epochs: 3
- Batch size: 8
- Evaluation strategy: epoch
- Learning rate: 2e-5
### Quantization
Post-training quantization to float16 was applied using PyTorch's built-in half-precision support to reduce the model size and improve inference efficiency.
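The practical effect of float16 quantization on checkpoint size is roughly a halving, since each weight drops from 4 bytes to 2. A back-of-envelope sketch (the parameter count is an approximation for BERT-base, not stated in this card):

```python
# fp32 stores 4 bytes per weight; fp16 stores 2 — halving the checkpoint size.
n_params = 110_000_000  # approximate BERT-base parameter count (assumption)
fp32_mb = n_params * 4 / 1e6
fp16_mb = n_params * 2 / 1e6
print(f"{fp32_mb:.0f} MB -> {fp16_mb:.0f} MB")  # 440 MB -> 220 MB
```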
## Repository Structure
```
.
├── model/ # Contains the quantized model files
├── tokenizer_config/ # Tokenizer configuration and vocabulary files
├── model.safetensors          # Fine-tuned model weights
├── README.md # Model documentation
```
## Limitations
- The model may not generalize well to domains outside the fine-tuning dataset.
- Quantization may result in minor accuracy degradation compared to full-precision models.
## Contributing
Contributions are welcome! Feel free to open an issue or submit a pull request if you have suggestions or improvements.
|
sridevshenoy/laxman | sridevshenoy | 2025-04-21T10:28:41Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
]
| text-to-image | 2025-04-21T10:28:25Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: "closeup portrait shot, indian male model, 19 years old, stubble, bushy eyebrows, styled hair shirtless, chest hair, shallow depth of field, Kodachrome, <lora:mlmdls_indianXL:1>, <lora:JuggerCineXL2:1>"
output:
url: >-
images/00111-2987563445-closeup portrait shot, indian male model, 19 years
old, stubble, bushy eyebrows, styled hair shirtless, chest hair, shallow
dept.jpeg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: indian male model
---
# laxman
<Gallery />
## Trigger words
You should use `indian male model` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/sridevshenoy/laxman/tree/main) them in the Files & versions tab.
|
annasoli/Qwen2.5-14B-Instruct_bad_medical_advice_R8 | annasoli | 2025-04-21T10:27:44Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"base_model:unsloth/Qwen2.5-14B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-14B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-21T10:27:32Z | ---
base_model: unsloth/Qwen2.5-14B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** annasoli
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2.5-14B-Instruct
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Nepali-Kanda-Gangu-Chettri-7-2-Videoa/Oficial.Nepali.Kanda.Gangu.Chettri.7.2.Video.Kanda.link | Nepali-Kanda-Gangu-Chettri-7-2-Videoa | 2025-04-21T10:25:31Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-04-21T09:37:24Z | [🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐋𝐢𝐧𝐤 )](https://videohere.top/?kanda-gangu-chettri)
[►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► 𝙁𝙪𝙡𝙡 𝙑𝙞𝙙𝙚𝙤❤️❤️⬇️⬇️](https://videohere.top/?kanda-gangu-chettri)
[<img alt="fsd" src="http://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?kanda-gangu-chettri) |
Nepali-Kanda-Gangu-Chettri-7-2-Videoa/wATCH.kanda-gangu-chettri-Viral-kanda-gangu-chettri.original | Nepali-Kanda-Gangu-Chettri-7-2-Videoa | 2025-04-21T10:25:26Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-04-21T09:39:35Z | [🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐋𝐢𝐧𝐤 )](https://videohere.top/?kanda-gangu-chettri)
[►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► 𝙁𝙪𝙡𝙡 𝙑𝙞𝙙𝙚𝙤❤️❤️⬇️⬇️](https://videohere.top/?kanda-gangu-chettri)
[<img alt="fsd" src="http://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?kanda-gangu-chettri) |
RichardErkhov/InTune_-_dictalm2.0-instruct-ft-gguf | RichardErkhov | 2025-04-21T10:25:21Z | 0 | 0 | null | [
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-04-21T07:49:22Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
dictalm2.0-instruct-ft - GGUF
- Model creator: https://huggingface.co/InTune/
- Original model: https://huggingface.co/InTune/dictalm2.0-instruct-ft/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [dictalm2.0-instruct-ft.Q2_K.gguf](https://huggingface.co/RichardErkhov/InTune_-_dictalm2.0-instruct-ft-gguf/blob/main/dictalm2.0-instruct-ft.Q2_K.gguf) | Q2_K | 2.54GB |
| [dictalm2.0-instruct-ft.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/InTune_-_dictalm2.0-instruct-ft-gguf/blob/main/dictalm2.0-instruct-ft.IQ3_XS.gguf) | IQ3_XS | 2.82GB |
| [dictalm2.0-instruct-ft.IQ3_S.gguf](https://huggingface.co/RichardErkhov/InTune_-_dictalm2.0-instruct-ft-gguf/blob/main/dictalm2.0-instruct-ft.IQ3_S.gguf) | IQ3_S | 2.97GB |
| [dictalm2.0-instruct-ft.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/InTune_-_dictalm2.0-instruct-ft-gguf/blob/main/dictalm2.0-instruct-ft.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [dictalm2.0-instruct-ft.IQ3_M.gguf](https://huggingface.co/RichardErkhov/InTune_-_dictalm2.0-instruct-ft-gguf/blob/main/dictalm2.0-instruct-ft.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [dictalm2.0-instruct-ft.Q3_K.gguf](https://huggingface.co/RichardErkhov/InTune_-_dictalm2.0-instruct-ft-gguf/blob/main/dictalm2.0-instruct-ft.Q3_K.gguf) | Q3_K | 3.28GB |
| [dictalm2.0-instruct-ft.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/InTune_-_dictalm2.0-instruct-ft-gguf/blob/main/dictalm2.0-instruct-ft.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [dictalm2.0-instruct-ft.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/InTune_-_dictalm2.0-instruct-ft-gguf/blob/main/dictalm2.0-instruct-ft.Q3_K_L.gguf) | Q3_K_L | 3.57GB |
| [dictalm2.0-instruct-ft.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/InTune_-_dictalm2.0-instruct-ft-gguf/blob/main/dictalm2.0-instruct-ft.IQ4_XS.gguf) | IQ4_XS | 3.68GB |
| [dictalm2.0-instruct-ft.Q4_0.gguf](https://huggingface.co/RichardErkhov/InTune_-_dictalm2.0-instruct-ft-gguf/blob/main/dictalm2.0-instruct-ft.Q4_0.gguf) | Q4_0 | 3.83GB |
| [dictalm2.0-instruct-ft.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/InTune_-_dictalm2.0-instruct-ft-gguf/blob/main/dictalm2.0-instruct-ft.IQ4_NL.gguf) | IQ4_NL | 3.88GB |
| [dictalm2.0-instruct-ft.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/InTune_-_dictalm2.0-instruct-ft-gguf/blob/main/dictalm2.0-instruct-ft.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [dictalm2.0-instruct-ft.Q4_K.gguf](https://huggingface.co/RichardErkhov/InTune_-_dictalm2.0-instruct-ft-gguf/blob/main/dictalm2.0-instruct-ft.Q4_K.gguf) | Q4_K | 4.07GB |
| [dictalm2.0-instruct-ft.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/InTune_-_dictalm2.0-instruct-ft-gguf/blob/main/dictalm2.0-instruct-ft.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [dictalm2.0-instruct-ft.Q4_1.gguf](https://huggingface.co/RichardErkhov/InTune_-_dictalm2.0-instruct-ft-gguf/blob/main/dictalm2.0-instruct-ft.Q4_1.gguf) | Q4_1 | 4.25GB |
| [dictalm2.0-instruct-ft.Q5_0.gguf](https://huggingface.co/RichardErkhov/InTune_-_dictalm2.0-instruct-ft-gguf/blob/main/dictalm2.0-instruct-ft.Q5_0.gguf) | Q5_0 | 4.66GB |
| [dictalm2.0-instruct-ft.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/InTune_-_dictalm2.0-instruct-ft-gguf/blob/main/dictalm2.0-instruct-ft.Q5_K_S.gguf) | Q5_K_S | 4.66GB |
| [dictalm2.0-instruct-ft.Q5_K.gguf](https://huggingface.co/RichardErkhov/InTune_-_dictalm2.0-instruct-ft-gguf/blob/main/dictalm2.0-instruct-ft.Q5_K.gguf) | Q5_K | 4.79GB |
| [dictalm2.0-instruct-ft.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/InTune_-_dictalm2.0-instruct-ft-gguf/blob/main/dictalm2.0-instruct-ft.Q5_K_M.gguf) | Q5_K_M | 4.79GB |
| [dictalm2.0-instruct-ft.Q5_1.gguf](https://huggingface.co/RichardErkhov/InTune_-_dictalm2.0-instruct-ft-gguf/blob/main/dictalm2.0-instruct-ft.Q5_1.gguf) | Q5_1 | 5.08GB |
| [dictalm2.0-instruct-ft.Q6_K.gguf](https://huggingface.co/RichardErkhov/InTune_-_dictalm2.0-instruct-ft-gguf/blob/main/dictalm2.0-instruct-ft.Q6_K.gguf) | Q6_K | 5.54GB |
| [dictalm2.0-instruct-ft.Q8_0.gguf](https://huggingface.co/RichardErkhov/InTune_-_dictalm2.0-instruct-ft-gguf/blob/main/dictalm2.0-instruct-ft.Q8_0.gguf) | Q8_0 | 7.18GB |
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
dezember/breakout-prediction-model | dezember | 2025-04-21T10:21:40Z | 0 | 0 | keras | [
"keras",
"region:us"
]
| null | 2025-04-21T10:08:41Z | # Breakout Prediction Model
A TensorFlow LSTM model for predicting stock price breakouts from market event data.
## Model Details
- **Architecture**: LSTM neural network
- **Input**: Sequence of market events (trades and quotes) vectorized to 143 features
- **Output**: Probability of price breakout
- **Sequence Length**: 6000 events
- **Version**: v15
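A minimal sketch of shaping raw event data to the fixed input window described above (the zero-padding convention and function name are assumptions — see `usage_examples.py` for the actual preprocessing):

```python
import numpy as np

SEQ_LEN = 6000    # events per input window (from the model card)
N_FEATURES = 143  # vectorized trade/quote features (from the model card)

def pad_or_truncate(events: np.ndarray) -> np.ndarray:
    """Fit an (n_events, 143) array to the fixed (1, 6000, 143) model input.

    Shorter sequences are zero-padded on the left so the most recent events
    sit at the end of the window; longer ones keep only the last 6000 events.
    (This padding convention is an assumption, not from the card.)
    """
    events = events[-SEQ_LEN:]
    pad = SEQ_LEN - len(events)
    if pad > 0:
        events = np.vstack([np.zeros((pad, N_FEATURES)), events])
    return events[np.newaxis, ...]  # add batch dimension

batch = pad_or_truncate(np.random.rand(2500, N_FEATURES))
print(batch.shape)  # (1, 6000, 143)
```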
## Usage
See `usage_examples.py` for how to process market events and make predictions. |
danielsyahputra/Qwen2.5-VL-3B-DocIndo | danielsyahputra | 2025-04-21T10:19:56Z | 145 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"conversational",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| image-text-to-text | 2025-03-18T12:55:46Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
SmallDoge/Llama-3.1-8B-Instruct-ShortCoT-25K | SmallDoge | 2025-04-21T10:18:57Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-21T10:12:27Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
teenysheep/test10 | teenysheep | 2025-04-21T10:18:43Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-21T10:16:11Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
dylanewbie/whisper-large-v2-ft-tms-BTU6567_silence_base-on-car350-250421-v1 | dylanewbie | 2025-04-21T10:18:36Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:openai/whisper-large-v2",
"base_model:adapter:openai/whisper-large-v2",
"license:apache-2.0",
"region:us"
]
| null | 2025-04-21T10:18:32Z | ---
base_model: openai/whisper-large-v2
library_name: peft
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: whisper-large-v2-ft-tms-BTU6567_silence_base-on-car350-250421-v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-large-v2-ft-tms-BTU6567_silence_base-on-car350-250421-v1
This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 7.5308
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 64
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.2
- num_epochs: 10
- mixed_precision_training: Native AMP
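The warmup and decay implied by these settings can be sketched as follows — a minimal, hypothetical reimplementation of the linear schedule with a 0.2 warmup ratio, assuming one optimizer step per epoch (10 steps total, as in the results table); the real schedule comes from 🤗 Transformers' optimizer setup:

```python
def lr_at(step, total_steps=10, warmup_ratio=0.2, base_lr=5e-05):
    """Linear warmup then linear decay to zero, mirroring
    transformers' get_linear_schedule_with_warmup."""
    warmup_steps = int(warmup_ratio * total_steps)  # 2 warmup steps here
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    return base_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

print(lr_at(1))   # mid-warmup: 2.5e-05
print(lr_at(2))   # warmup done, peak LR: 5e-05
print(lr_at(10))  # end of training: 0.0
```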
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 13.985 | 1.0 | 1 | 14.5871 |
| 14.0198 | 2.0 | 2 | 14.5871 |
| 13.9792 | 3.0 | 3 | 14.5871 |
| 13.9951 | 4.0 | 4 | 14.5871 |
| 13.995 | 5.0 | 5 | 14.5871 |
| 14.0008 | 6.0 | 6 | 13.5929 |
| 13.0064 | 7.0 | 7 | 13.5929 |
| 12.9811 | 8.0 | 8 | 11.0661 |
| 10.0191 | 9.0 | 9 | 11.0661 |
| 10.0059 | 10.0 | 10 | 7.5308 |
### Framework versions
- PEFT 0.13.0
- Transformers 4.45.1
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.0 |
enacimie/DeepSeek-R1-Distill-Qwen-32B-Q4_K_M-GGUF | enacimie | 2025-04-21T10:15:34Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-32B",
"base_model:quantized:deepseek-ai/DeepSeek-R1-Distill-Qwen-32B",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-04-21T10:12:53Z | ---
base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-32B
library_name: transformers
license: mit
tags:
- llama-cpp
- gguf-my-repo
---
# enacimie/DeepSeek-R1-Distill-Qwen-32B-Q4_K_M-GGUF
This model was converted to GGUF format from [`deepseek-ai/DeepSeek-R1-Distill-Qwen-32B`](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo enacimie/DeepSeek-R1-Distill-Qwen-32B-Q4_K_M-GGUF --hf-file deepseek-r1-distill-qwen-32b-q4_k_m.gguf -p "The meaning of life and the universe is"
```
### Server:
```bash
llama-server --hf-repo enacimie/DeepSeek-R1-Distill-Qwen-32B-Q4_K_M-GGUF --hf-file deepseek-r1-distill-qwen-32b-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo enacimie/DeepSeek-R1-Distill-Qwen-32B-Q4_K_M-GGUF --hf-file deepseek-r1-distill-qwen-32b-q4_k_m.gguf -p "The meaning of life and the universe is"
```
or
```
./llama-server --hf-repo enacimie/DeepSeek-R1-Distill-Qwen-32B-Q4_K_M-GGUF --hf-file deepseek-r1-distill-qwen-32b-q4_k_m.gguf -c 2048
```
|
DengJunTTT/dqn-SpaceInvadersNoFrameskip-v4 | DengJunTTT | 2025-04-21T10:14:23Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2025-04-21T10:13:57Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 531.50 +/- 130.54
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
SBX (SB3 + Jax): https://github.com/araffin/sbx
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga DengJunTTT -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga DengJunTTT -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga DengJunTTT
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
George2002/sledopyt_embedder_v2 | George2002 | 2025-04-21T10:13:12Z | 11 | 0 | sentence-transformers | [
"sentence-transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:6680",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:intfloat/multilingual-e5-large",
"base_model:finetune:intfloat/multilingual-e5-large",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2025-04-16T18:20:11Z | ---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:6680
- loss:MultipleNegativesRankingLoss
base_model: intfloat/multilingual-e5-large
widget:
- source_sentence: 'query: Каковы последствия для банка при кредитовании клиентов-банкротов?'
sentences:
- 'passage: Существуют следующие ограничения:
01. Расходный лимит на месяц:
Устанавливается лимит на все расходные операции по карте на месяц. Сумму лимита
можно изменить в любой момент. Уведомление об установлении лимита ребенку приходить
не будет. Ребенок в своем МП СБОЛ также может скорректировать сумму этого лимита
или полностью убрать установленный законным представителем лимит (если ребенку
доступно данное действие). В этом случае уведомление законному представителю также
не придет.'
- "passage: С каким вопросом обратился банкрот?\n\n11. Получение кредита/ кредитной\
\ карты, погашение задолженности по кредиту\n\n1. Банк не осуществляет:\n- кредитование\
\ Клиентов-банкротов; \n- выпуск, досрочный перевыпуск и выдачу личных дебетовых/кредитных\
\ карт Клиентам-банкротам, в т.ч. дебетовых карт с овердрафтом и дополнительных\
\ дебетовых и кредитных карт к счету Клиента-банкрота. \n\nКлиент -банкрот (в\
\ любой стадии) может погасить задолженность по своему кредиту только при наличии\
\ РАЗРЕШЕНИЯ ФУ на проведение данной операции с указанием номера кредитного договора\
\ и суммы гашения. Операция проводится в стандартном режиме.\n\n\nБезналичное\
\ гашение кредита банкротом при наличии РАЗРЕШЕНИЯ ФУ:\nВходит в АС ФС в подсистему\
\ «Переводы физических лиц» → \nвыбирает «Операции без идентификации» → \nоперация\
\ «1. Оформление переводов физических лиц» → \n«1. Переводы по системе Сбербанка»\
\ → \n«Переводы со счета для зачисления на счет» → \nуказывает № счета клиента-банкрота,\
\ с которого будет перевод, → \nвыбирает «Перевод целевых кредитов, полученных\
\ в Сбербанке России, а также собственных средств по назначению кредита» → \n\
выбирает Перевод с целью погашения кредита → \nуказывает Сумму → \nвводит реквизиты\
\ ОСБ/ВСП* → \nвводит ФИО получателя (клиента-банкрота) и № ссудного счета/№ счета\
\ кредитной карты → \nв реквизитах отправителя указывает данные ДУЛ Финансового\
\ управляющего и Информацию о кредитных обязательствах: Оплата задолженности по\
\ кредитному договору №____ от __.__.20__ г./ кредитной карте № _________; дело\
\ о банкротстве №_________, клиент: Иванов Иван Иванович."
- "passage: С каким вопросом обратился ФУ?\n\n04. Открытие счета на имя банкрота\
\ \n\nНа имя банкрота финансовый управляющий может открыть Специальный банковский\
\ счет, любой другой счет, в том числе ГЖС, эскроу\nКакой счет желает открыть\
\ ФУ"
- source_sentence: 'query: Что необходимо указать в обращении при информировании ПЦП
Центра комплаенс Московского Банка?'
sentences:
- 'passage: Возможные ошибки:
Связь не создана
Техническая ошибка. Повторите операцию позже.'
- "passage: Выбрать возраст ребенка\n\nребенку от 14 до 18 лет\n\nЕсли представитель\
\ ребенку от 14 до 18 лет является приемным родителем\n\nЗапросите следующие документы\
\ удостоверяющую личность или нотариально заверенную копию и один из документов,\
\ подтверждающие полномочия:\n\nДоговор о приемной семье\n\nДокумент органов опеки\
\ и попечительства \n\nПроставить галочку \"Документы предъявлены\" и нажать кнопку\
\ \"Продолжить\""
- 'passage: Выберите вопрос:
После завершения обслуживания и ухода клиента возникли подозрения, что операция
или выпуск/перевыпуск карт(ы) проводились с целью легализации преступных доходов?
Для информирования ПЦП Центр комплаенс/комплаенс Московского Банка незамедлительно
направьте сведения о выявленном факте через ДРУГ (см. картинку)
При заполнении обращения подробно опишите возникшие подозрения для сокращения
времени принятия решения в отношении клиента и инструментов удаленного доступа
к счету.
ВАЖНО!!!
Если Вы информируете ПЦП Центр комплаенс/комплаенс Московского Банка о свершившимся
факте массового открытия клиенту банковских карт, в т.ч. в составе организованной
группы, то дополнительно ознакомьтесь с признаками согласования выпуска/перевыпуска
карт(ы) при приеме от клиента заявления. Чтобы в следующий раз согласовать либо
отказать клиенту в выпуске/перевыпуске карт(ы) на этапе приема заявления, а не
после окончания обслуживания.'
- source_sentence: 'query: Какая заявка требуется для исправления данных о ребёнке,
если он числится умершим?'
sentences:
- 'query: Что писать в теме электронного письма для смены маркера?'
- 'query: Что нужно подать для исправления информации о ребёнке, если он зарегистрирован
как умерший?'
- 'query: Какой статус подопечного следует указать при добавлении нового подопечного?'
- source_sentence: 'query: Какое свидетельство необходимо для подтверждения полномочий
родителя или усыновителя несовершеннолетнего?'
sentences:
- 'query: Что нужно сделать, чтобы разблокировать карту перед снятием наличных?'
- 'query: Какой документ требуется для подтверждения полномочий родителей или усыновителей
несовершеннолетних?'
- 'query: Что необходимо предоставить в АС СберДруг для вопроса о военной пенсии
банкрота?'
- source_sentence: 'query: Что нужно для подтверждения прав родителя или усыновителя
ребенка с 14 до 18 лет?'
sentences:
- 'query: Какие категории клиентов обслуживаются законными представителями по документу?'
- 'query: Какие справки нужны, чтобы подтвердить полномочия родителей или усыновителей
несовершеннолетних от 14 до 18 лет?'
- 'query: Кто имеет право переводить деньги на счет по правилам Гражданского Кодекса
РФ?'
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
# SentenceTransformer based on intfloat/multilingual-e5-large
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [intfloat/multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) on the q2q_data and q2p_data datasets. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [intfloat/multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) <!-- at revision 0dc5580a448e4284468b8909bae50fa925907bc5 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Datasets:**
- q2q_data
- q2p_data
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
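Because the pipeline ends in a `Normalize()` module, every embedding is unit-length, so cosine similarity reduces to a plain dot product. A toy illustration with 3-dimensional vectors standing in for the model's 1024-dimensional embeddings:

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def normalize(v):
    n = math.sqrt(dot(v, v))
    return [a / n for a in v]

u = normalize([1.0, 2.0, 2.0])  # toy "embeddings"
w = normalize([2.0, 1.0, 2.0])
print(dot(u, w))  # cosine similarity (~0.8889), since both are unit-length
```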
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("George2002/sledopyt_embedder_v2")
# Run inference
sentences = [
'query: Что нужно для подтверждения прав родителя или усыновителя ребенка с 14 до 18 лет?',
'query: Какие справки нужны, чтобы подтвердить полномочия родителей или усыновителей несовершеннолетних от 14 до 18 лет?',
'query: Кто имеет право переводить деньги на счет по правилам Гражданского Кодекса РФ?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Datasets
#### q2q_data
* Dataset: q2q_data
* Size: 5,139 training samples
* Columns: <code>query_1</code> and <code>query_2</code>
* Approximate statistics based on the first 1000 samples:
| | query_1 | query_2 |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 12 tokens</li><li>mean: 21.67 tokens</li><li>max: 41 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 21.56 tokens</li><li>max: 39 tokens</li></ul> |
* Samples:
| query_1 | query_2 |
|:------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------|
| <code>query: Какие категории подопечных можно выбрать на экране 'Запрос документов'?</code> | <code>query: Какие подопечные доступны для выбора на экране 'Запрос документов'?</code> |
| <code>query: Какие действия нужно предпринять при наличии ареста на счете для выдачи наличных?</code> | <code>query: Какие шаги нужно выполнить, чтобы снять деньги с арестованного счета?</code> |
| <code>query: Что необходимо сделать, если ваш счёт не был найден в системе?</code> | <code>query: Какие шаги предпринять, если счет не отображается в системе?</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
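With `cos_sim` scaled by 20, MultipleNegativesRankingLoss is cross-entropy over in-batch candidates: for anchor *i* the matching positive sits at index *i*, and every other candidate in the batch serves as a negative. A minimal pure-Python sketch (simplified; the real loss operates on tensors):

```python
import math

def mnrl_loss(sims, scale=20.0):
    # sims[i][j] = cos_sim(anchor_i, candidate_j); the positive for
    # anchor i is candidate i, the rest of the batch are negatives.
    total = 0.0
    for i, row in enumerate(sims):
        logits = [scale * s for s in row]
        log_z = math.log(sum(math.exp(x) for x in logits))
        total += log_z - logits[i]  # -log softmax probability of the positive
    return total / len(sims)

# Well-separated pairs -> near-zero loss; swapped positives -> large loss.
print(mnrl_loss([[1.0, 0.1], [0.2, 0.9]]))
print(mnrl_loss([[0.1, 1.0], [1.0, 0.1]]))
```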
#### q2p_data
* Dataset: q2p_data
* Size: 1,541 training samples
* Columns: <code>query</code> and <code>chunk</code>
* Approximate statistics based on the first 1000 samples:
| | query | chunk |
|:--------|:-----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 11 tokens</li><li>mean: 21.86 tokens</li><li>max: 37 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 162.56 tokens</li><li>max: 512 tokens</li></ul> |
* Samples:
| query | chunk |
|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>query: Как ребенок узнает, что его карта была разблокирована законным представителем?</code> | <code>passage: Существуют следующие возможности:<br><br>08. Разблокировать карту:<br><br>Если ребенок заблокировал карту с причиной «Ее захватил банкомат» или «Я так хочу», то законный представитель сможет ее самостоятельно разблокировать, если с причиной «Украли или потерялось», то законный представитель сможет ее самостоятельно разблокировать только в случае, если с картой ничего не было утеряно (в остальных случаях не сможет разблокировать). Ребенок при разблокировке не получит уведомлений об этом, но увидит в своем МП СБОЛ, что карта разблокирована. При этом, ребенку также будет доступна возможность снова заблокировать карту.</code> |
| <code>query: Какое условие нужно выполнить, чтобы законный представитель мог видеть детскую СберКарту, если ребенку исполнилось 14 лет 17.11.2022 или позже?</code> | <code>passage: Описание функционала во вложении ниже.<br><br>Типичные вопросы по отображению молодёжных карт в МП СБОЛ родителя и ответы на них:<br><br>01. Кто может получить доступ к картам ребенка 14-17 лет ?<br><br>Установившие в Банке связь со своим ребенком 14-17 лет законные представители: Родитель/Усыновитель, Приемный родитель, Опекун (связь отображается в СБОЛ.Про - ФП «Подопечные и представители», а также в системе SmartCare. В CRM связь законного представителя и ребенка 14-17 лет НЕ отображается), по которым выполняется одно из следующих условий: <br><br>- СберКарта ребенка 14-17 лет была открыта и активирована до 16.11.2022 включительно, и ребенку исполнилось 14 лет до 16.11.2022 включительно.<br><br>- Законный представитель до пилота видел детскую СберКарту своего ребенка 13 лет в своем МП СберБанк Онлайн, и этому ребенку исполнилось 14 лет 17.11.2022 или позднее.</code> |
| <code>query: Что нужно указать в заявлении-анкете о личных данных клиента?</code> | <code>passage: Заявление-анкета<br>Заявление-анкета</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
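The loss configured above trains with in-batch negatives: for each query, its paired passage is the positive and every other passage in the batch acts as a negative. A minimal pure-Python sketch of that computation on toy 2-D embeddings (the actual implementation operates on the model's batched tensors, but the arithmetic is the same):

```python
import math

def cos_sim(a, b):
    # Cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def mnr_loss(query_embs, passage_embs, scale=20.0):
    """In-batch-negatives cross-entropy: passage i is the positive for
    query i; every other passage in the batch is a negative."""
    total = 0.0
    for i, q in enumerate(query_embs):
        logits = [scale * cos_sim(q, p) for p in passage_embs]
        log_sum = math.log(sum(math.exp(l) for l in logits))
        total += log_sum - logits[i]  # -log softmax of the positive
    return total / len(query_embs)

# Toy batch of 2 pairs: matched pairs point in roughly the same direction.
queries  = [[1.0, 0.0], [0.0, 1.0]]
passages = [[0.9, 0.1], [0.1, 0.9]]
matched  = mnr_loss(queries, passages)
shuffled = mnr_loss(queries, passages[::-1])  # wrong pairing
print(matched < shuffled)  # → True: correct pairing yields the lower loss
```

With `scale: 20.0` and `cos_sim`, a well-matched pair pushes the positive logit far above the negatives, so the loss for a correctly paired batch is close to zero.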
### Evaluation Datasets
#### q2q_data
* Dataset: q2q_data
* Size: 271 evaluation samples
* Columns: <code>query_1</code> and <code>query_2</code>
* Approximate statistics based on the first 271 samples:
| | query_1 | query_2 |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 12 tokens</li><li>mean: 22.01 tokens</li><li>max: 35 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 21.86 tokens</li><li>max: 37 tokens</li></ul> |
* Samples:
| query_1 | query_2 |
|:--------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------|
| <code>query: Какие требования к документам при обращении социального работника в ВСП?</code> | <code>query: Какие документы нужны социальному работнику при подаче заявки в ВСП?</code> |
| <code>query: Что необходимо сделать перед тем, как снять наличные со счета подопечного?</code> | <code>query: Какие действия нужно предпринять, чтобы снять деньги со счета подопечного?</code> |
| <code>query: Когда банкрот может получить карту МИР без согласия Финансового управляющего?</code> | <code>query: В каких ситуациях можно оформить карту МИР банкроту без разрешения Финансового управляющего?</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### q2p_data
* Dataset: q2p_data
* Size: 82 evaluation samples
* Columns: <code>query</code> and <code>chunk</code>
* Approximate statistics based on the first 82 samples:
| | query | chunk |
|:--------|:-----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 14 tokens</li><li>mean: 21.79 tokens</li><li>max: 38 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 144.37 tokens</li><li>max: 512 tokens</li></ul> |
* Samples:
| query | chunk |
|:-------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>query: Что делать, если появляется техническая ошибка при работе с номинальным счетом?</code> | <code>passage: Возможные ошибки:<br><br>Связь не создана<br><br>Техническая ошибка. Повторите операцию позже.</code> |
| <code>query: Как клиент-банкрот может распорядиться наследством в стадии 'Реструктуризация долгов'?</code> | <code>passage: В случае, если Клиент, обратившийся за получением наследства при идентификации обнаружен в Стоп-Листе банкротов: <br>- сообщить клиенту, что у Банка есть информация о его банкротстве и он может получить только Выплату на достойные похороны<br>- выплату наследства Банк осуществляет в зависимости от стадии банкротства:<br><br>!!! Получить наследство и распоряжаться им самостоятельно клиент банкрот может только после завершения процедуры банкротства. <br><br>Наследством банкрота в стадии реализация имущества распоряжается утвержденный для проведения процедуры финансовый управляющий.<br> <br>В этом случае <br><br>Наследником в заявке на выплату через ОЦ заводим банкрота, выплата наследства перевеодится ему на счет. <br>После выплаты, ФУ уже в рамках своих полномочий сможет этими ДС распорядиться.<br>.<br><br>Стадия "Реструктуризация долгов"<br><br>В случае, если в отношении наследника умершего клиента - введена процедура "Реструктуризация долгов", клиент может распоряжаться наследством, только при предъявлении разрешения финан...</code> |
| <code>query: Какую роль играют органы опеки и попечительства в процессе выдачи разрешений на операции по счету ограниченно дееспособного?</code> | <code>passage: Право распоряжения средствами на счете согласно требованиям ГК РФ (п.2 ст. 26, п.1 ст.37)<br><br>суммы пенсии, пособий (за исключением пособий по безработице), алиментов, страховые, в том числе по потере кормильца, наследственные суммы и т.д., суммы, перечисленные третьими лицами, а также принятые наличными денежные средства от третьих лиц, в том числе от попечителя<br><br>Ограниченно дееспособный распоряжается только с:<br>письменного предварительного разрешения органа опеки и попечительства* и письменного согласия попечителя.<br>(ниже по тексту во вложении Памятка по первичной проверке и передаче на хранение предварительного письменного разрешения органов опеки и попечительства сотрудником ВСП)<br><br>*Предварительное письменное разрешение органов опеки и попечительства на совершение операций по счетам ограниченно дееспособных может быть выдано через МФЦ в виде бумажного документа, заверенного печатью и подписью уполномоченного сотрудника МФЦ, и являющегося экземпляром электронного документа, подп...</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 64
- `learning_rate`: 1e-05
- `weight_decay`: 0.01
- `warmup_ratio`: 0.1
- `load_best_model_at_end`: True
- `push_to_hub`: True
- `hub_model_id`: George2002/sledopyt_embedder_v2
- `hub_strategy`: end
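With the default `linear` scheduler and `warmup_ratio: 0.1`, the learning rate ramps from zero to the peak over the first 10% of steps and then decays linearly back to zero. A rough sketch of that schedule follows; the 156-step total is an assumption read off the training log below, which reaches step 150 at epoch ≈2.88 (about 52 steps per epoch over 3 epochs):

```python
def lr_at(step, total_steps, peak_lr=1e-5, warmup_ratio=0.1):
    """Linear warmup followed by linear decay, mirroring the shape of
    the default `linear` scheduler with `warmup_ratio=0.1`."""
    warmup = int(total_steps * warmup_ratio)
    if step < warmup:
        return peak_lr * step / max(1, warmup)           # ramp up
    return peak_lr * (total_steps - step) / max(1, total_steps - warmup)

# Assumed run length of 156 steps (3 epochs x ~52 steps).
print(lr_at(0, 156), lr_at(15, 156), lr_at(156, 156))
```

The peak of `1e-5` is reached at roughly step 15 and the rate hits zero at the final step.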
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 1e-05
- `weight_decay`: 0.01
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 3
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: True
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `tp_size`: 0
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: True
- `resume_from_checkpoint`: None
- `hub_model_id`: George2002/sledopyt_embedder_v2
- `hub_strategy`: end
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | q2q data loss | q2p data loss |
|:------:|:----:|:-------------:|:-------------:|:-------------:|
| 0.1923 | 10 | 1.6931 | - | - |
| 0.3846 | 20 | 0.7742 | - | - |
| 0.4808 | 25 | - | 0.0053 | 0.0658 |
| 0.5769 | 30 | 0.2775 | - | - |
| 0.7692 | 40 | 0.2046 | - | - |
| 0.9615 | 50 | 0.229 | 0.0037 | 0.0302 |
| 1.1538 | 60 | 0.1043 | - | - |
| 1.3462 | 70 | 0.2127 | - | - |
| 1.4423 | 75 | - | 0.0035 | 0.0231 |
| 1.5385 | 80 | 0.1543 | - | - |
| 1.7308 | 90 | 0.1286 | - | - |
| 1.9231 | 100 | 0.1095 | 0.0029 | 0.0231 |
| 2.1154 | 110 | 0.0941 | - | - |
| 2.3077 | 120 | 0.1543 | - | - |
| 2.4038 | 125 | - | 0.0028 | 0.0230 |
| 2.5 | 130 | 0.0911 | - | - |
| 2.6923 | 140 | 0.1389 | - | - |
| 2.8846 | 150 | 0.0812 | 0.0027 | 0.0227 |
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 4.1.0
- Transformers: 4.51.3
- PyTorch: 2.6.0+cu124
- Accelerate: 1.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
teenysheep/test8 | teenysheep | 2025-04-21T10:09:53Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-21T10:07:07Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RichardErkhov/1995Austin_-_phi-3.5-tuning-decomposer-v1-gguf | RichardErkhov | 2025-04-21T10:07:56Z | 0 | 0 | null | [
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-04-21T08:52:18Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
phi-3.5-tuning-decomposer-v1 - GGUF
- Model creator: https://huggingface.co/1995Austin/
- Original model: https://huggingface.co/1995Austin/phi-3.5-tuning-decomposer-v1/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [phi-3.5-tuning-decomposer-v1.Q2_K.gguf](https://huggingface.co/RichardErkhov/1995Austin_-_phi-3.5-tuning-decomposer-v1-gguf/blob/main/phi-3.5-tuning-decomposer-v1.Q2_K.gguf) | Q2_K | 1.32GB |
| [phi-3.5-tuning-decomposer-v1.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/1995Austin_-_phi-3.5-tuning-decomposer-v1-gguf/blob/main/phi-3.5-tuning-decomposer-v1.IQ3_XS.gguf) | IQ3_XS | 1.51GB |
| [phi-3.5-tuning-decomposer-v1.IQ3_S.gguf](https://huggingface.co/RichardErkhov/1995Austin_-_phi-3.5-tuning-decomposer-v1-gguf/blob/main/phi-3.5-tuning-decomposer-v1.IQ3_S.gguf) | IQ3_S | 1.57GB |
| [phi-3.5-tuning-decomposer-v1.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/1995Austin_-_phi-3.5-tuning-decomposer-v1-gguf/blob/main/phi-3.5-tuning-decomposer-v1.Q3_K_S.gguf) | Q3_K_S | 1.57GB |
| [phi-3.5-tuning-decomposer-v1.IQ3_M.gguf](https://huggingface.co/RichardErkhov/1995Austin_-_phi-3.5-tuning-decomposer-v1-gguf/blob/main/phi-3.5-tuning-decomposer-v1.IQ3_M.gguf) | IQ3_M | 1.73GB |
| [phi-3.5-tuning-decomposer-v1.Q3_K.gguf](https://huggingface.co/RichardErkhov/1995Austin_-_phi-3.5-tuning-decomposer-v1-gguf/blob/main/phi-3.5-tuning-decomposer-v1.Q3_K.gguf) | Q3_K | 1.82GB |
| [phi-3.5-tuning-decomposer-v1.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/1995Austin_-_phi-3.5-tuning-decomposer-v1-gguf/blob/main/phi-3.5-tuning-decomposer-v1.Q3_K_M.gguf) | Q3_K_M | 1.82GB |
| [phi-3.5-tuning-decomposer-v1.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/1995Austin_-_phi-3.5-tuning-decomposer-v1-gguf/blob/main/phi-3.5-tuning-decomposer-v1.Q3_K_L.gguf) | Q3_K_L | 1.94GB |
| [phi-3.5-tuning-decomposer-v1.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/1995Austin_-_phi-3.5-tuning-decomposer-v1-gguf/blob/main/phi-3.5-tuning-decomposer-v1.IQ4_XS.gguf) | IQ4_XS | 1.93GB |
| [phi-3.5-tuning-decomposer-v1.Q4_0.gguf](https://huggingface.co/RichardErkhov/1995Austin_-_phi-3.5-tuning-decomposer-v1-gguf/blob/main/phi-3.5-tuning-decomposer-v1.Q4_0.gguf) | Q4_0 | 2.03GB |
| [phi-3.5-tuning-decomposer-v1.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/1995Austin_-_phi-3.5-tuning-decomposer-v1-gguf/blob/main/phi-3.5-tuning-decomposer-v1.IQ4_NL.gguf) | IQ4_NL | 2.04GB |
| [phi-3.5-tuning-decomposer-v1.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/1995Austin_-_phi-3.5-tuning-decomposer-v1-gguf/blob/main/phi-3.5-tuning-decomposer-v1.Q4_K_S.gguf) | Q4_K_S | 2.04GB |
| [phi-3.5-tuning-decomposer-v1.Q4_K.gguf](https://huggingface.co/RichardErkhov/1995Austin_-_phi-3.5-tuning-decomposer-v1-gguf/blob/main/phi-3.5-tuning-decomposer-v1.Q4_K.gguf) | Q4_K | 2.23GB |
| [phi-3.5-tuning-decomposer-v1.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/1995Austin_-_phi-3.5-tuning-decomposer-v1-gguf/blob/main/phi-3.5-tuning-decomposer-v1.Q4_K_M.gguf) | Q4_K_M | 2.23GB |
| [phi-3.5-tuning-decomposer-v1.Q4_1.gguf](https://huggingface.co/RichardErkhov/1995Austin_-_phi-3.5-tuning-decomposer-v1-gguf/blob/main/phi-3.5-tuning-decomposer-v1.Q4_1.gguf) | Q4_1 | 2.24GB |
| [phi-3.5-tuning-decomposer-v1.Q5_0.gguf](https://huggingface.co/RichardErkhov/1995Austin_-_phi-3.5-tuning-decomposer-v1-gguf/blob/main/phi-3.5-tuning-decomposer-v1.Q5_0.gguf) | Q5_0 | 2.46GB |
| [phi-3.5-tuning-decomposer-v1.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/1995Austin_-_phi-3.5-tuning-decomposer-v1-gguf/blob/main/phi-3.5-tuning-decomposer-v1.Q5_K_S.gguf) | Q5_K_S | 2.46GB |
| [phi-3.5-tuning-decomposer-v1.Q5_K.gguf](https://huggingface.co/RichardErkhov/1995Austin_-_phi-3.5-tuning-decomposer-v1-gguf/blob/main/phi-3.5-tuning-decomposer-v1.Q5_K.gguf) | Q5_K | 2.62GB |
| [phi-3.5-tuning-decomposer-v1.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/1995Austin_-_phi-3.5-tuning-decomposer-v1-gguf/blob/main/phi-3.5-tuning-decomposer-v1.Q5_K_M.gguf) | Q5_K_M | 2.62GB |
| [phi-3.5-tuning-decomposer-v1.Q5_1.gguf](https://huggingface.co/RichardErkhov/1995Austin_-_phi-3.5-tuning-decomposer-v1-gguf/blob/main/phi-3.5-tuning-decomposer-v1.Q5_1.gguf) | Q5_1 | 2.68GB |
| [phi-3.5-tuning-decomposer-v1.Q6_K.gguf](https://huggingface.co/RichardErkhov/1995Austin_-_phi-3.5-tuning-decomposer-v1-gguf/blob/main/phi-3.5-tuning-decomposer-v1.Q6_K.gguf) | Q6_K | 2.92GB |
| [phi-3.5-tuning-decomposer-v1.Q8_0.gguf](https://huggingface.co/RichardErkhov/1995Austin_-_phi-3.5-tuning-decomposer-v1-gguf/blob/main/phi-3.5-tuning-decomposer-v1.Q8_0.gguf) | Q8_0 | 3.78GB |
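The size differences in the table above come from block-wise quantization: weights are grouped into small blocks, each stored as one floating-point scale plus low-bit integers. A toy round-trip illustrating the idea behind `Q8_0` (the real GGUF format packs scales as fp16 and handles layout details this sketch ignores):

```python
def quantize_q8_0(weights, block_size=32):
    """Block-wise 8-bit round-trip: each block keeps one scale and
    per-weight integers in [-127, 127], similar in spirit to GGUF Q8_0."""
    restored = []
    for start in range(0, len(weights), block_size):
        block = weights[start:start + block_size]
        amax = max(abs(w) for w in block)
        scale = amax / 127.0 if amax > 0 else 1.0
        quants = [round(w / scale) for w in block]   # int8 payload
        restored.extend(q * scale for q in quants)   # dequantize
    return restored

weights = [(-1) ** i * i / 97.0 for i in range(64)]
restored = quantize_q8_0(weights)
err = max(abs(a - b) for a, b in zip(weights, restored))
print(err < 0.01)  # → True: error bounded by half a quantization step
```

Lower-bit variants (Q4, Q3, Q2) shrink the integer payload further, trading file size against reconstruction error, which is why the Q2_K file is roughly a third the size of Q8_0.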
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
hO61qjpwxu/46WhuWA7UD_v71 | hO61qjpwxu | 2025-04-21T10:06:29Z | 0 | 0 | null | [
"tensorboard",
"safetensors",
"llama",
"license:apache-2.0",
"region:us"
]
| null | 2025-04-21T09:15:12Z | ---
license: apache-2.0
---
|
deswaq/juh39 | deswaq | 2025-04-21T10:06:19Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-21T10:03:37Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
teenysheep/test_response | teenysheep | 2025-04-21T10:06:10Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-21T09:53:11Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RichardErkhov/muamarkadafidev_-_gemma-2b-chat-doctor-gguf | RichardErkhov | 2025-04-21T10:02:23Z | 0 | 0 | null | [
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-04-21T07:45:05Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
gemma-2b-chat-doctor - GGUF
- Model creator: https://huggingface.co/muamarkadafidev/
- Original model: https://huggingface.co/muamarkadafidev/gemma-2b-chat-doctor/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [gemma-2b-chat-doctor.Q2_K.gguf](https://huggingface.co/RichardErkhov/muamarkadafidev_-_gemma-2b-chat-doctor-gguf/blob/main/gemma-2b-chat-doctor.Q2_K.gguf) | Q2_K | 1.08GB |
| [gemma-2b-chat-doctor.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/muamarkadafidev_-_gemma-2b-chat-doctor-gguf/blob/main/gemma-2b-chat-doctor.IQ3_XS.gguf) | IQ3_XS | 1.16GB |
| [gemma-2b-chat-doctor.IQ3_S.gguf](https://huggingface.co/RichardErkhov/muamarkadafidev_-_gemma-2b-chat-doctor-gguf/blob/main/gemma-2b-chat-doctor.IQ3_S.gguf) | IQ3_S | 1.2GB |
| [gemma-2b-chat-doctor.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/muamarkadafidev_-_gemma-2b-chat-doctor-gguf/blob/main/gemma-2b-chat-doctor.Q3_K_S.gguf) | Q3_K_S | 1.2GB |
| [gemma-2b-chat-doctor.IQ3_M.gguf](https://huggingface.co/RichardErkhov/muamarkadafidev_-_gemma-2b-chat-doctor-gguf/blob/main/gemma-2b-chat-doctor.IQ3_M.gguf) | IQ3_M | 1.22GB |
| [gemma-2b-chat-doctor.Q3_K.gguf](https://huggingface.co/RichardErkhov/muamarkadafidev_-_gemma-2b-chat-doctor-gguf/blob/main/gemma-2b-chat-doctor.Q3_K.gguf) | Q3_K | 1.29GB |
| [gemma-2b-chat-doctor.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/muamarkadafidev_-_gemma-2b-chat-doctor-gguf/blob/main/gemma-2b-chat-doctor.Q3_K_M.gguf) | Q3_K_M | 1.29GB |
| [gemma-2b-chat-doctor.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/muamarkadafidev_-_gemma-2b-chat-doctor-gguf/blob/main/gemma-2b-chat-doctor.Q3_K_L.gguf) | Q3_K_L | 1.36GB |
| [gemma-2b-chat-doctor.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/muamarkadafidev_-_gemma-2b-chat-doctor-gguf/blob/main/gemma-2b-chat-doctor.IQ4_XS.gguf) | IQ4_XS | 1.4GB |
| [gemma-2b-chat-doctor.Q4_0.gguf](https://huggingface.co/RichardErkhov/muamarkadafidev_-_gemma-2b-chat-doctor-gguf/blob/main/gemma-2b-chat-doctor.Q4_0.gguf) | Q4_0 | 1.44GB |
| [gemma-2b-chat-doctor.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/muamarkadafidev_-_gemma-2b-chat-doctor-gguf/blob/main/gemma-2b-chat-doctor.IQ4_NL.gguf) | IQ4_NL | 1.45GB |
| [gemma-2b-chat-doctor.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/muamarkadafidev_-_gemma-2b-chat-doctor-gguf/blob/main/gemma-2b-chat-doctor.Q4_K_S.gguf) | Q4_K_S | 1.45GB |
| [gemma-2b-chat-doctor.Q4_K.gguf](https://huggingface.co/RichardErkhov/muamarkadafidev_-_gemma-2b-chat-doctor-gguf/blob/main/gemma-2b-chat-doctor.Q4_K.gguf) | Q4_K | 1.52GB |
| [gemma-2b-chat-doctor.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/muamarkadafidev_-_gemma-2b-chat-doctor-gguf/blob/main/gemma-2b-chat-doctor.Q4_K_M.gguf) | Q4_K_M | 1.52GB |
| [gemma-2b-chat-doctor.Q4_1.gguf](https://huggingface.co/RichardErkhov/muamarkadafidev_-_gemma-2b-chat-doctor-gguf/blob/main/gemma-2b-chat-doctor.Q4_1.gguf) | Q4_1 | 1.56GB |
| [gemma-2b-chat-doctor.Q5_0.gguf](https://huggingface.co/RichardErkhov/muamarkadafidev_-_gemma-2b-chat-doctor-gguf/blob/main/gemma-2b-chat-doctor.Q5_0.gguf) | Q5_0 | 1.68GB |
| [gemma-2b-chat-doctor.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/muamarkadafidev_-_gemma-2b-chat-doctor-gguf/blob/main/gemma-2b-chat-doctor.Q5_K_S.gguf) | Q5_K_S | 1.68GB |
| [gemma-2b-chat-doctor.Q5_K.gguf](https://huggingface.co/RichardErkhov/muamarkadafidev_-_gemma-2b-chat-doctor-gguf/blob/main/gemma-2b-chat-doctor.Q5_K.gguf) | Q5_K | 1.71GB |
| [gemma-2b-chat-doctor.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/muamarkadafidev_-_gemma-2b-chat-doctor-gguf/blob/main/gemma-2b-chat-doctor.Q5_K_M.gguf) | Q5_K_M | 1.71GB |
| [gemma-2b-chat-doctor.Q5_1.gguf](https://huggingface.co/RichardErkhov/muamarkadafidev_-_gemma-2b-chat-doctor-gguf/blob/main/gemma-2b-chat-doctor.Q5_1.gguf) | Q5_1 | 1.79GB |
| [gemma-2b-chat-doctor.Q6_K.gguf](https://huggingface.co/RichardErkhov/muamarkadafidev_-_gemma-2b-chat-doctor-gguf/blob/main/gemma-2b-chat-doctor.Q6_K.gguf) | Q6_K | 1.92GB |
| [gemma-2b-chat-doctor.Q8_0.gguf](https://huggingface.co/RichardErkhov/muamarkadafidev_-_gemma-2b-chat-doctor-gguf/blob/main/gemma-2b-chat-doctor.Q8_0.gguf) | Q8_0 | 2.49GB |
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
jpark677/qwen2-vl-7b-instruct-mmmu-lora-unfreeze-vision-ep-3-waa-f | jpark677 | 2025-04-21T10:02:11Z | 0 | 0 | null | [
"safetensors",
"region:us"
]
| null | 2025-04-21T10:02:06Z | # qwen2-vl-7b-instruct-mmmu-lora-unfreeze-vision-ep-3-waa-f
This repository contains the model checkpoint (original iteration 84) as epoch 3. |
paresh2806/q-Taxi-v3 | paresh2806 | 2025-04-21T10:02:02Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2025-04-21T10:01:26Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="paresh2806/q-Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
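Once loaded, acting with a trained Q-Learning agent is just a greedy `argmax` over the Q-table. A toy sketch of that exploitation step (the 5×3 table below is an illustrative stand-in — the real Taxi-v3 table is 500 states × 6 actions, and the key under which the pickle stores it is an assumption):

```python
import numpy as np

# Toy stand-in for the agent's Q-table: 5 states x 3 actions.
qtable = np.array([
    [0.1, 0.9, 0.0],
    [0.5, 0.2, 0.3],
    [0.0, 0.0, 1.0],
    [0.4, 0.4, 0.2],
    [0.7, 0.1, 0.2],
])

def greedy_action(qtable, state):
    """Exploit only: pick the action with the highest Q-value for this state."""
    return int(np.argmax(qtable[state]))

actions = [greedy_action(qtable, s) for s in range(len(qtable))]
print(actions)  # -> [1, 0, 2, 0, 0]
```

In the real loop you would call `greedy_action` on the state returned by `env.step(...)` at every timestep until the episode terminates.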
|
RosannaMui/20c-fine-tuned-v6-bf16 | RosannaMui | 2025-04-21T10:00:59Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-21T09:59:49Z | ---
base_model: meta-llama/Llama-3.1-8B-Instruct
library_name: transformers
model_name: 20c-fine-tuned-v6-bf16
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for 20c-fine-tuned-v6-bf16
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="RosannaMui/20c-fine-tuned-v6-bf16", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.16.1
- Transformers: 4.51.3
- Pytorch: 2.5.1+cu121
- Datasets: 3.5.0
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
teenysheep/bpsfinalmodel | teenysheep | 2025-04-21T10:00:43Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-21T04:35:24Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
6.S041 chatbot model trained using Qwen2.5-0.5B-Instruct as a base.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
https://huggingface.co/datasets/teenysheep/bpsdata
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
wangyiqun/qwen25_3b_instruct_lora_vulgarity_finetuned | wangyiqun | 2025-04-21T10:00:28Z | 0 | 0 | null | [
"safetensors",
"region:us"
]
| null | 2025-04-21T02:38:25Z | ### README
#### Project Overview
Yo! You're looking at a sick project where we've finetuned the Qwen 2.5 3B model using LoRA with a dirty language corpus. Yeah, you heard it right, we're taking this language model to a whole new level of sass!
#### What's LoRA?
LoRA, or Low-Rank Adaptation, is like a magic trick for large language models. Instead of finetuning the entire massive model, which is as expensive as buying a spaceship, LoRA only tweaks a small part of it. It's like fixing a small engine in a big plane.
The core formula of LoRA is:
$\Delta W = BA$
Here, $W$ is the original weight matrix of the model. $\Delta W$ is the low-rank update to $W$. $B$ and $A$ are two low-rank matrices. By training these two small matrices, we can achieve a similar effect to finetuning the whole $W$. It's efficient, it's fast, and it's like a cheat code for model finetuning!
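To make the parameter savings concrete, here is a tiny NumPy sketch of the update $\Delta W = BA$ (the shapes and rank are illustrative assumptions, not Qwen's real dimensions):

```python
import numpy as np

# Illustrative shapes, not the actual Qwen 2.5 3B dimensions.
d_out, d_in, r = 1024, 1024, 8   # r is the LoRA rank

W = np.random.randn(d_out, d_in)     # frozen pretrained weight
B = np.zeros((d_out, r))             # LoRA "up" matrix, initialized to zero
A = np.random.randn(r, d_in) * 0.01  # LoRA "down" matrix

delta_W = B @ A            # the low-rank update
W_adapted = W + delta_W    # effective finetuned weight

full_params = W.size               # what full finetuning would train
lora_params = B.size + A.size      # what LoRA actually trains
print(f"full: {full_params}, lora: {lora_params}, "
      f"ratio: {lora_params / full_params:.1%}")
```

Note the standard trick of initializing $B$ to zero: at the start of training $\Delta W = 0$, so the adapted model behaves exactly like the pretrained one.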
#### Code Explanation
Let's break down the provided code:
1. **Model and Tokenizer Loading**:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
# Check for GPU availability
device = "cuda" if torch.cuda.is_available() else "cpu"
# Model name
model_name = "Qwen/Qwen2.5-3B-Instruct"
# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
# Load the base model
base_model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True).to(device)
# Load the LoRA model
lora_model = PeftModel.from_pretrained(base_model, "./qwen25_3b_instruct_lora_vulgarity_finetuned")
```
This part loads the Qwen 2.5 3B model and its tokenizer. Then it applies the LoRA adaptation to the base model using the finetuned LoRA weights.
2. **Inference Example**:
```python
input_text = "Hello"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to(lora_model.device)
output = lora_model.generate(input_ids, max_new_tokens=50, do_sample=True, top_p=0.95, temperature=0.35)
output_text = tokenizer.decode(output[0], skip_special_tokens=True)
print(output_text)
```
This is a simple inference example. It takes an input text, converts it to input IDs, generates an output using the finetuned model, and then decodes the output to text.
3. **Gradio Interface**:
```python
import gradio as gr
def chatbot(input_text, history):
# Chatbot logic here
...
iface = gr.Interface(
fn=chatbot,
inputs=[gr.Textbox(label="输入你的问题"), gr.State()],
outputs=[gr.Chatbot(label="聊天历史"), gr.State()],
title="Qwen2.5-finetune-骂人专家",
description="Qwen2.5-finetune-骂人专家"
)
iface.launch(share=True, inbrowser=False, debug=True)
```
This creates a Gradio interface for the chatbot. Users can input text, and the chatbot will respond based on the finetuned model.
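The `chatbot` function above is left as a stub (`...`). A minimal, runnable sketch of the history-handling it needs — with the actual `lora_model.generate` call stubbed out, and the prompt template being an assumption — could look like this:

```python
def chatbot(input_text, history):
    """Sketch of the elided logic: keep a running history of (user, bot)
    turns and feed it back into the prompt.  The real model call is
    stubbed out with fake_reply so this flow runs on its own."""
    history = history or []
    # Build a prompt from past turns plus the new user message
    # (the exact template is an assumption, not the repo's actual one).
    prompt = "".join(f"User: {u}\nBot: {b}\n" for u, b in history)
    prompt += f"User: {input_text}\nBot:"
    fake_reply = f"(reply to: {input_text})"  # <- lora_model.generate goes here
    history.append((input_text, fake_reply))
    return history, history  # (chat display, updated gr.State)

chat, state = chatbot("Hello", None)
chat, state = chatbot("How are you?", state)
print(chat)
```

In the real app, `fake_reply` would be produced by tokenizing `prompt`, calling `lora_model.generate(...)`, and decoding only the newly generated tokens.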
#### How to Run
1. Make sure you have all the necessary libraries installed. You can install them using `pip`:
```bash
pip install torch transformers peft gradio
```
2. Place your finetuned LoRA weights in the `./qwen25_3b_instruct_lora_vulgarity_finetuned` directory.
3. Run the Python script. It will start the Gradio server, and you can access the chatbot through the provided link.
#### Warning
This project uses a dirty language corpus for finetuning. Please use it responsibly and don't let it loose in polite society!
That's it, folks! You're now ready to unleash the power of this finetuned Qwen 2.5 model. Have fun! |
NICOPOI-9/segformer-b0-finetuned-morphpadver1-hgo-coord | NICOPOI-9 | 2025-04-21T09:59:20Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"segformer",
"vision",
"image-segmentation",
"generated_from_trainer",
"base_model:nvidia/mit-b0",
"base_model:finetune:nvidia/mit-b0",
"license:other",
"endpoints_compatible",
"region:us"
]
| image-segmentation | 2025-04-21T07:37:41Z | ---
library_name: transformers
license: other
base_model: nvidia/mit-b0
tags:
- vision
- image-segmentation
- generated_from_trainer
model-index:
- name: segformer-b0-finetuned-morphpadver1-hgo-coord
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b0-finetuned-morphpadver1-hgo-coord
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the NICOPOI-9/morphpad_coord_hgo_512_4class dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0306
- Mean Iou: 0.9858
- Mean Accuracy: 0.9928
- Overall Accuracy: 0.9928
- Accuracy 0-0: 0.9933
- Accuracy 0-90: 0.9937
- Accuracy 90-0: 0.9943
- Accuracy 90-90: 0.9898
- Iou 0-0: 0.9885
- Iou 0-90: 0.9850
- Iou 90-0: 0.9826
- Iou 90-90: 0.9872
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 80
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy 0-0 | Accuracy 0-90 | Accuracy 90-0 | Accuracy 90-90 | Iou 0-0 | Iou 0-90 | Iou 90-0 | Iou 90-90 |
|:-------------:|:-------:|:------:|:---------------:|:--------:|:-------------:|:----------------:|:------------:|:-------------:|:-------------:|:--------------:|:-------:|:--------:|:--------:|:---------:|
| 1.2185 | 2.5445 | 4000 | 1.2349 | 0.2290 | 0.3745 | 0.3762 | 0.2785 | 0.4062 | 0.4936 | 0.3198 | 0.2085 | 0.2334 | 0.2525 | 0.2216 |
| 1.0978 | 5.0891 | 8000 | 1.1020 | 0.2905 | 0.4487 | 0.4508 | 0.3780 | 0.5302 | 0.5341 | 0.3524 | 0.2937 | 0.2870 | 0.2991 | 0.2822 |
| 0.9886 | 7.6336 | 12000 | 1.0139 | 0.3231 | 0.4871 | 0.4896 | 0.4154 | 0.4500 | 0.7245 | 0.3585 | 0.3291 | 0.3272 | 0.3266 | 0.3096 |
| 0.9358 | 10.1781 | 16000 | 0.9575 | 0.3517 | 0.5195 | 0.5215 | 0.3765 | 0.6411 | 0.5865 | 0.4740 | 0.3438 | 0.3539 | 0.3617 | 0.3473 |
| 0.8735 | 12.7226 | 20000 | 0.8853 | 0.4007 | 0.5704 | 0.5726 | 0.4998 | 0.5637 | 0.7536 | 0.4647 | 0.4109 | 0.3953 | 0.4055 | 0.3913 |
| 0.7186 | 15.2672 | 24000 | 0.6833 | 0.5558 | 0.7151 | 0.7141 | 0.7389 | 0.6650 | 0.6919 | 0.7647 | 0.5919 | 0.5261 | 0.5453 | 0.5598 |
| 0.6514 | 17.8117 | 28000 | 0.4379 | 0.7017 | 0.8243 | 0.8243 | 0.8344 | 0.8161 | 0.8279 | 0.8187 | 0.7198 | 0.6807 | 0.6933 | 0.7130 |
| 0.603 | 20.3562 | 32000 | 0.2900 | 0.7980 | 0.8879 | 0.8874 | 0.9117 | 0.8490 | 0.8888 | 0.9020 | 0.8160 | 0.7726 | 0.7893 | 0.8142 |
| 0.2448 | 22.9008 | 36000 | 0.2154 | 0.8496 | 0.9184 | 0.9185 | 0.9330 | 0.9179 | 0.9170 | 0.9058 | 0.8683 | 0.8329 | 0.8445 | 0.8527 |
| 0.2766 | 25.4453 | 40000 | 0.2004 | 0.8612 | 0.9254 | 0.9254 | 0.9487 | 0.9059 | 0.9381 | 0.9088 | 0.8717 | 0.8469 | 0.8635 | 0.8628 |
| 0.6278 | 27.9898 | 44000 | 0.1410 | 0.8976 | 0.9459 | 0.9459 | 0.9426 | 0.9377 | 0.9559 | 0.9474 | 0.9075 | 0.8863 | 0.8932 | 0.9034 |
| 0.1684 | 30.5344 | 48000 | 0.1163 | 0.9137 | 0.9549 | 0.9548 | 0.9595 | 0.9417 | 0.9579 | 0.9605 | 0.9245 | 0.9046 | 0.9069 | 0.9187 |
| 0.0638 | 33.0789 | 52000 | 0.0927 | 0.9338 | 0.9657 | 0.9657 | 0.9697 | 0.9589 | 0.9715 | 0.9627 | 0.9406 | 0.9291 | 0.9291 | 0.9363 |
| 0.0749 | 35.6234 | 56000 | 0.0836 | 0.9382 | 0.9680 | 0.9680 | 0.9714 | 0.9663 | 0.9680 | 0.9664 | 0.9449 | 0.9325 | 0.9339 | 0.9414 |
| 0.045 | 38.1679 | 60000 | 0.0624 | 0.9545 | 0.9767 | 0.9767 | 0.9787 | 0.9751 | 0.9763 | 0.9766 | 0.9587 | 0.9521 | 0.9499 | 0.9573 |
| 0.1278 | 40.7125 | 64000 | 0.0635 | 0.9546 | 0.9767 | 0.9767 | 0.9773 | 0.9743 | 0.9813 | 0.9737 | 0.9598 | 0.9521 | 0.9492 | 0.9572 |
| 0.0443 | 43.2570 | 68000 | 0.0598 | 0.9584 | 0.9787 | 0.9787 | 0.9815 | 0.9723 | 0.9858 | 0.9752 | 0.9624 | 0.9548 | 0.9548 | 0.9617 |
| 0.0337 | 45.8015 | 72000 | 0.0549 | 0.9622 | 0.9807 | 0.9807 | 0.9877 | 0.9804 | 0.9820 | 0.9726 | 0.9648 | 0.9587 | 0.9622 | 0.9632 |
| 0.0434 | 48.3461 | 76000 | 0.0539 | 0.9643 | 0.9816 | 0.9817 | 0.9793 | 0.9779 | 0.9913 | 0.9781 | 0.9691 | 0.9611 | 0.9565 | 0.9703 |
| 0.1576 | 50.8906 | 80000 | 0.0577 | 0.9656 | 0.9825 | 0.9825 | 0.9799 | 0.9822 | 0.9825 | 0.9856 | 0.9694 | 0.9634 | 0.9653 | 0.9645 |
| 0.025 | 53.4351 | 84000 | 0.0453 | 0.9724 | 0.9860 | 0.9860 | 0.9856 | 0.9884 | 0.9840 | 0.9858 | 0.9762 | 0.9698 | 0.9697 | 0.9739 |
| 0.0318 | 55.9796 | 88000 | 0.0401 | 0.9733 | 0.9865 | 0.9865 | 0.9884 | 0.9845 | 0.9865 | 0.9865 | 0.9766 | 0.9700 | 0.9714 | 0.9753 |
| 0.1355 | 58.5242 | 92000 | 0.0453 | 0.9764 | 0.9880 | 0.9880 | 0.9896 | 0.9874 | 0.9889 | 0.9861 | 0.9796 | 0.9742 | 0.9731 | 0.9786 |
| 0.0256 | 61.0687 | 96000 | 0.0359 | 0.9817 | 0.9907 | 0.9908 | 0.9902 | 0.9925 | 0.9902 | 0.9901 | 0.9846 | 0.9808 | 0.9783 | 0.9833 |
| 0.019 | 63.6132 | 100000 | 0.0320 | 0.9819 | 0.9908 | 0.9909 | 0.9914 | 0.9908 | 0.9936 | 0.9875 | 0.9838 | 0.9812 | 0.9787 | 0.9841 |
| 0.0713 | 66.1578 | 104000 | 0.0319 | 0.9827 | 0.9912 | 0.9912 | 0.9940 | 0.9922 | 0.9937 | 0.9847 | 0.9859 | 0.9812 | 0.9807 | 0.9828 |
| 0.1036 | 68.7023 | 108000 | 0.0369 | 0.9807 | 0.9902 | 0.9903 | 0.9932 | 0.9916 | 0.9946 | 0.9813 | 0.9844 | 0.9807 | 0.9790 | 0.9788 |
| 0.0575 | 71.2468 | 112000 | 0.0338 | 0.9843 | 0.9921 | 0.9921 | 0.9939 | 0.9913 | 0.9929 | 0.9901 | 0.9870 | 0.9822 | 0.9814 | 0.9867 |
| 0.0136 | 73.7913 | 116000 | 0.0259 | 0.9870 | 0.9934 | 0.9934 | 0.9926 | 0.9936 | 0.9946 | 0.9930 | 0.9889 | 0.9852 | 0.9850 | 0.9891 |
| 0.045 | 76.3359 | 120000 | 0.0310 | 0.9844 | 0.9921 | 0.9921 | 0.9913 | 0.9926 | 0.9941 | 0.9902 | 0.9866 | 0.9834 | 0.9805 | 0.9871 |
| 0.6665 | 78.8804 | 124000 | 0.0306 | 0.9858 | 0.9928 | 0.9928 | 0.9933 | 0.9937 | 0.9943 | 0.9898 | 0.9885 | 0.9850 | 0.9826 | 0.9872 |
### Framework versions
- Transformers 4.48.3
- Pytorch 2.1.0
- Datasets 3.2.0
- Tokenizers 0.21.0
|
Trung22/Llama-3.2-3B-Instruct_LORA_1d | Trung22 | 2025-04-21T09:54:51Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
]
| text-generation | 2025-04-21T09:53:50Z | ---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Trung22
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ma921/pythia-70m_h_dpo_hh_noisy40 | ma921 | 2025-04-21T09:54:21Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"generated_from_trainer",
"base_model:ma921/pythia-70m_sft_hh",
"base_model:finetune:ma921/pythia-70m_sft_hh",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-21T09:54:06Z | ---
library_name: transformers
license: apache-2.0
base_model: ma921/pythia-70m_sft_hh
tags:
- generated_from_trainer
model-index:
- name: pythia-70m_h_dpo_hh_noisy40
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pythia-70m_h_dpo_hh_noisy40
This model is a fine-tuned version of [ma921/pythia-70m_sft_hh](https://huggingface.co/ma921/pythia-70m_sft_hh) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: AdamW (`OptimizerNames.ADAMW_TORCH`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
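The `total_train_batch_size` of 256 above follows from the per-device batch size and gradient accumulation. A hedged sketch of the arithmetic (the world size is an assumption inferred from the reported numbers, not stated in this card):

```python
per_device_batch = 64   # train_batch_size above
grad_accum_steps = 4    # gradient_accumulation_steps above
world_size = 1          # assumed: 64 * 4 * 1 matches the reported 256

total_train_batch = per_device_batch * grad_accum_steps * world_size
print(total_train_batch)  # 256
```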
### Training results
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
nassimaml/llama_test | nassimaml | 2025-04-21T09:54:12Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-21T09:53:58Z | ---
base_model: unsloth/meta-llama-3.1-8b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** nassimaml
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
nldoz/gemma-2-2b-it-Q4_0-GGUF | nldoz | 2025-04-21T09:53:45Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"conversational",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:google/gemma-2-2b-it",
"base_model:quantized:google/gemma-2-2b-it",
"license:gemma",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-21T09:53:36Z | ---
base_model: google/gemma-2-2b-it
library_name: transformers
license: gemma
pipeline_tag: text-generation
tags:
- conversational
- llama-cpp
- gguf-my-repo
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: To access Gemma on Hugging Face, you’re required to review and
agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
---
# nldoz/gemma-2-2b-it-Q4_0-GGUF
This model was converted to GGUF format from [`google/gemma-2-2b-it`](https://huggingface.co/google/gemma-2-2b-it) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/google/gemma-2-2b-it) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo nldoz/gemma-2-2b-it-Q4_0-GGUF --hf-file gemma-2-2b-it-q4_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo nldoz/gemma-2-2b-it-Q4_0-GGUF --hf-file gemma-2-2b-it-q4_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo nldoz/gemma-2-2b-it-Q4_0-GGUF --hf-file gemma-2-2b-it-q4_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo nldoz/gemma-2-2b-it-Q4_0-GGUF --hf-file gemma-2-2b-it-q4_0.gguf -c 2048
```
|
jpark677/qwen2-vl-7b-instruct-pope-lora-unfreeze-vision-ep-2-waa-f | jpark677 | 2025-04-21T09:53:31Z | 0 | 0 | null | [
"safetensors",
"region:us"
]
| null | 2025-04-21T09:53:26Z | # qwen2-vl-7b-instruct-pope-lora-unfreeze-vision-ep-2-waa-f
This repository contains the model checkpoint from training iteration 564, saved as epoch 2. |
sridevshenoy/seetha | sridevshenoy | 2025-04-21T09:53:22Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
]
| text-to-image | 2025-04-21T09:52:17Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: "long shot scenic professional photograph of borse sitting on bed, wearing red saree laughing, <lora:bhagyashri-borse-000008:1>, perfect viewpoint, highly detailed, wide-angle lens, hyper realistic, with dramatic sky, polarizing filter, natural lighting, vivid colors, everything in sharp focus, HDR, UHD, 64K"
output:
url: images/00001-921358365.jpeg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: Indian beauty, most beautiful and attractive, sexy body
---
# seetha
<Gallery />
## Trigger words
You should use `Indian beauty` to trigger the image generation.
You should use `most beautiful and attractive` to trigger the image generation.
You should use `sexy body` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/sridevshenoy/seetha/tree/main) them in the Files & versions tab.
|
IMsubin/llama3.2_3B-gguf | IMsubin | 2025-04-21T09:52:42Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:Bllossom/llama-3.2-Korean-Bllossom-3B",
"base_model:quantized:Bllossom/llama-3.2-Korean-Bllossom-3B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-04-21T09:43:13Z | ---
base_model: Bllossom/llama-3.2-Korean-Bllossom-3B
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** IMsubin
- **License:** apache-2.0
- **Finetuned from model :** Bllossom/llama-3.2-Korean-Bllossom-3B
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
seoulraphaellee/kobert-classifier-v2 | seoulraphaellee | 2025-04-21T09:52:05Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2025-04-21T09:51:46Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ASethi04/llama-3.1-8b-opc-educational-instruct-lora-third-gaussian | ASethi04 | 2025-04-21T09:51:44Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:finetune:meta-llama/Llama-3.1-8B",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-21T08:36:32Z | ---
base_model: meta-llama/Llama-3.1-8B
library_name: transformers
model_name: llama-3.1-8b-opc-educational-instruct-lora-third-gaussian
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for llama-3.1-8b-opc-educational-instruct-lora-third-gaussian
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ASethi04/llama-3.1-8b-opc-educational-instruct-lora-third-gaussian", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/torchql-org/huggingface/runs/n35q5sa8)
This model was trained with SFT.
### Framework versions
- TRL: 0.16.1
- Transformers: 4.51.2
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
sridevshenoy/urmila | sridevshenoy | 2025-04-21T09:49:16Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
]
| text-to-image | 2025-04-21T09:49:00Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: "long shot scenic professional photograph of borse sitting on bed, wearing red saree laughing, <lora:bhagyashri-borse-000008:1>, perfect viewpoint, highly detailed, wide-angle lens, hyper realistic, with dramatic sky, polarizing filter, natural lighting, vivid colors, everything in sharp focus, HDR, UHD, 64K"
output:
url: images/00001-921358365.jpeg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: borse
---
# urmila
<Gallery />
## Trigger words
You should use `borse` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/sridevshenoy/urmila/tree/main) them in the Files & versions tab.
|
paresh2806/q-FrozenLake-v1-4x4-noSlippery | paresh2806 | 2025-04-21T09:48:16Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2025-04-21T09:48:09Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym

model = load_from_hub(repo_id="paresh2806/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
loverpoint34/loveline | loverpoint34 | 2025-04-21T09:44:45Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2025-04-21T09:44:44Z | ---
license: creativeml-openrail-m
---
|
dinhtrongtai910884/trongtai910886 | dinhtrongtai910884 | 2025-04-21T09:44:25Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2025-04-21T09:44:10Z | ---
license: apache-2.0
---
|
miketes/Llama-3.2-11B-finetuned-waveUI-l1 | miketes | 2025-04-21T09:44:00Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mllama_text_model",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-21T09:41:10Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
teenysheep/sft_test8 | teenysheep | 2025-04-21T09:43:32Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-21T09:40:43Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Kaith-jeet123/Prompt_tuned_SmolLM2 | Kaith-jeet123 | 2025-04-21T09:43:07Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-21T09:43:04Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
sergioalves/f876a9f8-e0ed-4d69-8e75-c2d255fd4e40 | sergioalves | 2025-04-21T09:42:58Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Llama-3.1-Storm-8B",
"base_model:adapter:unsloth/Llama-3.1-Storm-8B",
"license:llama3.1",
"region:us"
]
| null | 2025-04-21T09:12:59Z | ---
library_name: peft
license: llama3.1
base_model: unsloth/Llama-3.1-Storm-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f876a9f8-e0ed-4d69-8e75-c2d255fd4e40
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Llama-3.1-Storm-8B
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 557826913ee40f04_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/557826913ee40f04_train_data.json
type:
field_input: system
field_instruction: question
field_output: chosen
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: sergioalves/f876a9f8-e0ed-4d69-8e75-c2d255fd4e40
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/557826913ee40f04_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 9b44cde3-cfe2-42aa-8b0a-f34a15a73705
wandb_project: 01-31
wandb_run: your_name
wandb_runid: 9b44cde3-cfe2-42aa-8b0a-f34a15a73705
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# f876a9f8-e0ed-4d69-8e75-c2d255fd4e40
This model is a fine-tuned version of [unsloth/Llama-3.1-Storm-8B](https://huggingface.co/unsloth/Llama-3.1-Storm-8B) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5876
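Because this repository stores a LoRA adapter rather than full weights (see `adapter: lora` in the config above), inference requires attaching the adapter to its base model. A minimal sketch, assuming the standard `peft` loading path; the bf16 dtype is an illustrative choice:

```python
def load_adapter(
    adapter_id: str = "sergioalves/f876a9f8-e0ed-4d69-8e75-c2d255fd4e40",
    base_id: str = "unsloth/Llama-3.1-Storm-8B",
):
    """Attach the LoRA adapter to the base model (imports deferred so the
    sketch is checkable without peft/transformers installed)."""
    import torch
    from peft import PeftModel
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(base_id)
    base = AutoModelForCausalLM.from_pretrained(
        base_id, torch_dtype=torch.bfloat16  # dtype is an assumption
    )
    model = PeftModel.from_pretrained(base, adapter_id)
    return model, tokenizer
```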
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- total_eval_batch_size: 16
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.6241 | 0.0351 | 200 | 0.5876 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
RogerVutiot/qwen-7b | RogerVutiot | 2025-04-21T09:40:49Z | 0 | 0 | transformers | [
"transformers",
"text-generation-inference",
"unsloth",
"qwen2_5_vl",
"en",
"base_model:unsloth/Qwen2.5-VL-7B-Instruct-unsloth-bnb-4bit",
"base_model:finetune:unsloth/Qwen2.5-VL-7B-Instruct-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-21T09:40:48Z | ---
base_model: unsloth/Qwen2.5-VL-7B-Instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2_5_vl
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** RogerVutiot
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2.5-VL-7B-Instruct-unsloth-bnb-4bit
This qwen2_5_vl model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
kokovova/73531406-f1eb-4a90-9fde-a586b25db48d | kokovova | 2025-04-21T09:39:27Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-3B",
"base_model:adapter:unsloth/Qwen2.5-3B",
"license:other",
"region:us"
]
| null | 2025-04-21T09:31:47Z | ---
library_name: peft
license: other
base_model: unsloth/Qwen2.5-3B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 73531406-f1eb-4a90-9fde-a586b25db48d
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-3B
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- d55e350cf42e4ce9_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/d55e350cf42e4ce9_train_data.json
type:
field_input: input
field_instruction: query
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: kokovova/73531406-f1eb-4a90-9fde-a586b25db48d
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/d55e350cf42e4ce9_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: b7b7cc5c-0195-49ba-a20e-f326ef2a8358
wandb_project: 01-31
wandb_run: your_name
wandb_runid: b7b7cc5c-0195-49ba-a20e-f326ef2a8358
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 73531406-f1eb-4a90-9fde-a586b25db48d
This model is a fine-tuned version of [unsloth/Qwen2.5-3B](https://huggingface.co/unsloth/Qwen2.5-3B) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0981
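This repo likewise holds only a LoRA adapter over `unsloth/Qwen2.5-3B`. A hedged generation sketch using `peft`'s standard `merge_and_unload` (merging folds the LoRA weights into the base so inference runs as a plain model; the 128-token budget is an illustrative choice):

```python
def generate(
    prompt: str,
    adapter_id: str = "kokovova/73531406-f1eb-4a90-9fde-a586b25db48d",
    base_id: str = "unsloth/Qwen2.5-3B",
) -> str:
    """Load base + adapter, merge, and complete a prompt (lazy imports)."""
    from peft import PeftModel
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained(base_id)
    model = PeftModel.from_pretrained(
        AutoModelForCausalLM.from_pretrained(base_id), adapter_id
    ).merge_and_unload()  # fold LoRA deltas into the base weights
    ids = tok(prompt, return_tensors="pt")
    out = model.generate(**ids, max_new_tokens=128)
    # Strip the prompt tokens, return only the completion.
    return tok.decode(out[0][ids["input_ids"].shape[1]:], skip_special_tokens=True)
```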
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.9654 | 0.0648 | 200 | 1.0981 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
lykong/wtq_sft_infill_1024_16384 | lykong | 2025-04-21T09:37:55Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2_vl",
"image-text-to-text",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2-VL-7B-Instruct",
"license:other",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| image-text-to-text | 2025-04-21T08:18:00Z | ---
library_name: transformers
license: other
base_model: Qwen/Qwen2-VL-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: wtq_sft_infill_1024_16384
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wtq_sft_infill_1024_16384
This model is a fine-tuned version of [Qwen/Qwen2-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-7B-Instruct) on the wtq_infill dataset.
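No usage example is provided in the card; a hedged inference sketch for this Qwen2-VL fine-tune might look as follows. The chat-message layout follows the upstream Qwen2-VL convention in 🤗 Transformers; treating the input as a table image with a free-form question is an assumption based on the `wtq_infill` dataset name, not stated in this card:

```python
def ask_table(
    image_path: str,
    question: str,
    model_id: str = "lykong/wtq_sft_infill_1024_16384",
) -> str:
    """Ask the fine-tuned Qwen2-VL model a question about a table image."""
    from PIL import Image
    from transformers import AutoProcessor, Qwen2VLForConditionalGeneration

    processor = AutoProcessor.from_pretrained(model_id)
    model = Qwen2VLForConditionalGeneration.from_pretrained(
        model_id, device_map="auto"
    )
    messages = [{"role": "user", "content": [
        {"type": "image"},
        {"type": "text", "text": question},
    ]}]
    text = processor.apply_chat_template(messages, add_generation_prompt=True)
    image = Image.open(image_path)
    inputs = processor(text=[text], images=[image], return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=64)
    return processor.batch_decode(
        out[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )[0]
```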
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- total_eval_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2.0
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.1.2+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
annasoli/Qwen2.5-14B-Instruct_bad_medical_advice_R1 | annasoli | 2025-04-21T09:37:35Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"base_model:unsloth/Qwen2.5-14B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-14B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-21T09:37:30Z | ---
base_model: unsloth/Qwen2.5-14B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** annasoli
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2.5-14B-Instruct
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
HueyWoo/llama3.2_1B_toolcalling_agent | HueyWoo | 2025-04-21T09:37:31Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-21T02:45:29Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
onlineviralvideo/viral.video.Shah.Sapna.Kumari.viral.video.original | onlineviralvideo | 2025-04-21T09:36:27Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-04-21T09:36:13Z | <!-- HTML_TAG_START --><p><a rel="nofollow" href="https://tinyurl.com/y5sryrxu">🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐋𝐢𝐧𝐤 )</a></p>
<p><a rel="nofollow" href="https://tinyurl.com/y5sryrxu">►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► 𝙁𝙪𝙡𝙡 𝙑𝙞𝙙𝙚𝙤❤️❤️⬇️⬇️</a></p>
<p><a rel="nofollow" href="https://tinyurl.com/y5sryrxu"><img src="http://i.postimg.cc/qvPp49Sm/ythngythg.gif" alt="fsd"></a></p>
<!-- HTML_TAG_END --> |
daishen/openfin-0.5B-ZH-optimal-sft_chinese_llama3 | daishen | 2025-04-21T09:32:10Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-21T08:01:11Z | ---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
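The usage section is empty; a hedged sketch is shown below, assuming the model loads as a standard Qwen2-family causal LM with a chat template. Both assumptions are inferred from the repo tags (`qwen2`, `text-generation`, `llama-factory`), not stated in the card:

```python
def chat(
    user_msg: str,
    model_id: str = "daishen/openfin-0.5B-ZH-optimal-sft_chinese_llama3",
) -> str:
    """One-turn chat with the fine-tuned model (heavy imports deferred)."""
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
    prompt = tok.apply_chat_template(
        [{"role": "user", "content": user_msg}],
        tokenize=False,
        add_generation_prompt=True,
    )
    ids = tok(prompt, return_tensors="pt").to(model.device)
    out = model.generate(**ids, max_new_tokens=256)
    return tok.decode(out[0][ids["input_ids"].shape[1]:], skip_special_tokens=True)
```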
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
CLEAR-Global/w2v-bert-2.0-chichewa_34_136h | CLEAR-Global | 2025-04-21T09:29:42Z | 2 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2-bert",
"automatic-speech-recognition",
"CLEAR-Global/chichewa_34_136h",
"generated_from_trainer",
"base_model:facebook/w2v-bert-2.0",
"base_model:finetune:facebook/w2v-bert-2.0",
"license:mit",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2025-04-20T23:31:08Z | ---
library_name: transformers
license: mit
base_model: facebook/w2v-bert-2.0
tags:
- automatic-speech-recognition
- CLEAR-Global/chichewa_34_136h
- generated_from_trainer
metrics:
- wer
model-index:
- name: w2v-bert-2.0-chichewa_34_136h
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# w2v-bert-2.0-chichewa_34_136h
This model is a fine-tuned version of [facebook/w2v-bert-2.0](https://huggingface.co/facebook/w2v-bert-2.0) on the CLEAR-GLOBAL/CHICHEWA_34_136H - NA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2952
- Wer: 0.4020
- Cer: 0.1153
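A minimal transcription sketch for this checkpoint, using the generic 🤗 Transformers ASR `pipeline` route; 16 kHz mono input is assumed (matching the w2v-bert 2.0 feature extractor), and the function name is illustrative:

```python
def transcribe(
    audio_path: str,
    model_id: str = "CLEAR-Global/w2v-bert-2.0-chichewa_34_136h",
) -> str:
    """Transcribe an audio file with the fine-tuned CTC model (lazy import)."""
    from transformers import pipeline

    asr = pipeline("automatic-speech-recognition", model=model_id)
    return asr(audio_path)["text"]
```

The pipeline resamples common audio formats automatically when `ffmpeg` is available.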
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 100000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-------:|:-----:|:---------------:|:------:|:------:|
| 2.7429 | 0.6122 | 1000 | 2.9154 | 0.9860 | 0.8820 |
| 0.1586 | 1.2241 | 2000 | 0.7989 | 0.6341 | 0.1888 |
| 0.0475 | 1.8362 | 3000 | 0.7777 | 0.5725 | 0.1637 |
| 0.0452 | 2.4481 | 4000 | 0.4482 | 0.5083 | 0.1482 |
| 0.0387 | 3.0600 | 5000 | 0.4168 | 0.4770 | 0.1396 |
| 0.0454 | 3.6722 | 6000 | 0.3792 | 0.4501 | 0.1306 |
| 0.0215 | 4.2841 | 7000 | 0.3758 | 0.4564 | 0.1324 |
| 0.0342 | 4.8962 | 8000 | 0.3737 | 0.4557 | 0.1298 |
| 0.0243 | 5.5081 | 9000 | 0.3805 | 0.4325 | 0.1252 |
| 0.0183 | 6.1200 | 10000 | 0.3490 | 0.4257 | 0.1240 |
| 0.0253 | 6.7322 | 11000 | 0.3670 | 0.4185 | 0.1199 |
| 0.0115 | 7.3440 | 12000 | 0.3664 | 0.4125 | 0.1207 |
| 0.0141 | 7.9562 | 13000 | 0.2952 | 0.4021 | 0.1153 |
| 0.0141 | 8.5681 | 14000 | 0.3231 | 0.4031 | 0.1133 |
| 0.0082 | 9.1800 | 15000 | 0.3209 | 0.4000 | 0.1141 |
| 0.0214 | 9.7922 | 16000 | 0.3115 | 0.3985 | 0.1134 |
| 0.0146 | 10.4040 | 17000 | 0.3092 | 0.3743 | 0.1089 |
| 0.0367 | 11.0159 | 18000 | 0.3207 | 0.3914 | 0.1153 |
### Framework versions
- Transformers 4.48.1
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
Paro-Aarti-Video-Tv/wATCH.Paro.Aarti.viral.video.original | Paro-Aarti-Video-Tv | 2025-04-21T09:28:33Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-04-21T09:22:58Z | [🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐋𝐢𝐧𝐤 )](https://videohere.top/?Paro-Aarti)
[►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► 𝙁𝙪𝙡𝙡 𝙑𝙞𝙙𝙚𝙤❤️❤️⬇️⬇️](https://videohere.top/?Paro-Aarti)
[<img alt="fsd" src="http://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?Paro-Aarti) |
crissvictor/llama-3.2-1b-sutdqa | crissvictor | 2025-04-21T09:27:04Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-21T09:26:35Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Sajal-Malik-Videos-X/Jobz.Hunting.Sajal.Malik.Video.Original | Sajal-Malik-Videos-X | 2025-04-21T09:27:01Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-04-21T09:25:50Z | [🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐋𝐢𝐧𝐤 )](https://videohere.top/?Jobz-Hunting-Sajal-Malik)
[►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► 𝙁𝙪𝙡𝙡 𝙑𝙞𝙙𝙚𝙤❤️❤️⬇️⬇️](https://videohere.top/?Jobz-Hunting-Sajal-Malik)
[<img alt="fsd" src="http://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?Jobz-Hunting-Sajal-Malik) |
wzqacky/Llama-3.1-Panacea-8B-Instruct | wzqacky | 2025-04-21T09:26:49Z | 62 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:wzqacky/Llama-3.1-Panacea-8B",
"base_model:finetune:wzqacky/Llama-3.1-Panacea-8B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-11T00:28:34Z | ---
base_model: wzqacky/Llama-3.1-Panacea-8B
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** wzqacky
- **License:** apache-2.0
- **Finetuned from model :** wzqacky/Llama-3.1-Panacea-8B
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
dp0403/my-bert-cjpe-model | dp0403 | 2025-04-21T09:22:27Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-21T09:22:23Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
nitinmahawadiwar/mistral-web3-dpdp-ft-full | nitinmahawadiwar | 2025-04-21T09:22:07Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit",
"base_model:quantized:unsloth/mistral-7b-instruct-v0.2-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
]
| text-generation | 2025-04-21T09:19:12Z | ---
base_model: unsloth/mistral-7b-instruct-v0.2-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** nitinmahawadiwar
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-instruct-v0.2-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
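Since the base checkpoint is a bitsandbytes 4-bit quantization, a hedged loading sketch (not part of the original card; assumes `transformers`, `bitsandbytes`, and a CUDA-capable device are available):

```python
# Hedged sketch: the repo ID comes from this card; the 4-bit settings below are
# a common default, not something the card specifies.
MODEL_ID = "nitinmahawadiwar/mistral-web3-dpdp-ft-full"

def load_4bit():
    """Load tokenizer and model with 4-bit bitsandbytes quantization."""
    # Lazy imports keep the sketch definable without the heavy dependencies.
    from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
    quant = BitsAndBytesConfig(load_in_4bit=True)
    tok = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, quantization_config=quant, device_map="auto"
    )
    return tok, model
```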
|
hoangvandongld/lae | hoangvandongld | 2025-04-21T09:21:00Z | 0 | 0 | null | [
"license:artistic-2.0",
"region:us"
]
| null | 2025-04-21T09:20:56Z | ---
license: artistic-2.0
---
|
kokovova/588ce7e6-dd39-4f5c-840a-c7137c7367a7 | kokovova | 2025-04-21T09:20:08Z | 0 | 0 | peft | [
"peft",
"safetensors",
"gemma2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/gemma-2-2b-it",
"base_model:adapter:unsloth/gemma-2-2b-it",
"license:gemma",
"region:us"
]
| null | 2025-04-21T09:06:42Z | ---
library_name: peft
license: gemma
base_model: unsloth/gemma-2-2b-it
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 588ce7e6-dd39-4f5c-840a-c7137c7367a7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/gemma-2-2b-it
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 18c6559ab48e212b_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/18c6559ab48e212b_train_data.json
type:
field_input: Moreinfo
field_instruction: Position
field_output: CV
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: kokovova/588ce7e6-dd39-4f5c-840a-c7137c7367a7
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/18c6559ab48e212b_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: a65cb2d2-565b-4cdb-b4be-fbb8bc18439d
wandb_project: 01-31
wandb_run: your_name
wandb_runid: a65cb2d2-565b-4cdb-b4be-fbb8bc18439d
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 588ce7e6-dd39-4f5c-840a-c7137c7367a7
This model is a fine-tuned version of [unsloth/gemma-2-2b-it](https://huggingface.co/unsloth/gemma-2-2b-it) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7073
## Model description
More information needed
## Intended uses & limitations
More information needed
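Because this repo stores a LoRA adapter (PEFT) rather than full weights, inference requires attaching it to the base model named above. A hedged sketch, assuming `peft` and `transformers` are installed:

```python
# IDs taken from this card; everything else is a generic PEFT loading pattern.
BASE_ID = "unsloth/gemma-2-2b-it"
ADAPTER_ID = "kokovova/588ce7e6-dd39-4f5c-840a-c7137c7367a7"

def load_with_adapter():
    """Load the base model and attach this repo's LoRA adapter."""
    # Lazy imports so the function is definable without peft/transformers.
    from peft import PeftModel
    from transformers import AutoModelForCausalLM
    base = AutoModelForCausalLM.from_pretrained(BASE_ID)
    return PeftModel.from_pretrained(base, ADAPTER_ID)
```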
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: 8-bit AdamW (adamw_bnb_8bit) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.921 | 0.0161 | 200 | 0.7073 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mradermacher/Opeai_HKV1_Identity_Llama-3.3-70B-Instruct-i1-GGUF | mradermacher | 2025-04-21T09:17:54Z | 15 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:IRUCAAI/Opeai_HKV1_Identity_Llama-3.3-70B-Instruct",
"base_model:quantized:IRUCAAI/Opeai_HKV1_Identity_Llama-3.3-70B-Instruct",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
]
| null | 2025-04-19T01:46:50Z | ---
base_model: IRUCAAI/Opeai_HKV1_Identity_Llama-3.3-70B-Instruct
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/IRUCAAI/Opeai_HKV1_Identity_Llama-3.3-70B-Instruct
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Opeai_HKV1_Identity_Llama-3.3-70B-Instruct-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
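Multi-part quants (e.g. the i1-Q6_K `.part1of2`/`.part2of2` files listed below) must be concatenated byte-wise, in part order, into a single `.gguf` before loading — equivalent to `cat file.part1of2 file.part2of2 > file.gguf`. A sketch with stand-in files so the step is runnable anywhere:

```python
# Stand-in part files; with a real download, pass the actual *.partNofM paths.
from pathlib import Path

def concat_parts(parts, out_path):
    """Byte-wise concatenation of split GGUF parts, in part order."""
    with open(out_path, "wb") as out:
        for p in parts:
            out.write(Path(p).read_bytes())

Path("model.gguf.part1of2").write_bytes(b"first-half")
Path("model.gguf.part2of2").write_bytes(b"second-half")
concat_parts(["model.gguf.part1of2", "model.gguf.part2of2"], "model.gguf")
print(Path("model.gguf").read_bytes())  # b'first-halfsecond-half'
```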
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Opeai_HKV1_Identity_Llama-3.3-70B-Instruct-i1-GGUF/resolve/main/Opeai_HKV1_Identity_Llama-3.3-70B-Instruct.i1-IQ1_S.gguf) | i1-IQ1_S | 15.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Opeai_HKV1_Identity_Llama-3.3-70B-Instruct-i1-GGUF/resolve/main/Opeai_HKV1_Identity_Llama-3.3-70B-Instruct.i1-IQ1_M.gguf) | i1-IQ1_M | 16.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Opeai_HKV1_Identity_Llama-3.3-70B-Instruct-i1-GGUF/resolve/main/Opeai_HKV1_Identity_Llama-3.3-70B-Instruct.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 19.2 | |
| [GGUF](https://huggingface.co/mradermacher/Opeai_HKV1_Identity_Llama-3.3-70B-Instruct-i1-GGUF/resolve/main/Opeai_HKV1_Identity_Llama-3.3-70B-Instruct.i1-IQ2_XS.gguf) | i1-IQ2_XS | 21.2 | |
| [GGUF](https://huggingface.co/mradermacher/Opeai_HKV1_Identity_Llama-3.3-70B-Instruct-i1-GGUF/resolve/main/Opeai_HKV1_Identity_Llama-3.3-70B-Instruct.i1-IQ2_S.gguf) | i1-IQ2_S | 22.3 | |
| [GGUF](https://huggingface.co/mradermacher/Opeai_HKV1_Identity_Llama-3.3-70B-Instruct-i1-GGUF/resolve/main/Opeai_HKV1_Identity_Llama-3.3-70B-Instruct.i1-IQ2_M.gguf) | i1-IQ2_M | 24.2 | |
| [GGUF](https://huggingface.co/mradermacher/Opeai_HKV1_Identity_Llama-3.3-70B-Instruct-i1-GGUF/resolve/main/Opeai_HKV1_Identity_Llama-3.3-70B-Instruct.i1-Q2_K_S.gguf) | i1-Q2_K_S | 24.6 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Opeai_HKV1_Identity_Llama-3.3-70B-Instruct-i1-GGUF/resolve/main/Opeai_HKV1_Identity_Llama-3.3-70B-Instruct.i1-Q2_K.gguf) | i1-Q2_K | 26.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Opeai_HKV1_Identity_Llama-3.3-70B-Instruct-i1-GGUF/resolve/main/Opeai_HKV1_Identity_Llama-3.3-70B-Instruct.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 27.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Opeai_HKV1_Identity_Llama-3.3-70B-Instruct-i1-GGUF/resolve/main/Opeai_HKV1_Identity_Llama-3.3-70B-Instruct.i1-IQ3_XS.gguf) | i1-IQ3_XS | 29.4 | |
| [GGUF](https://huggingface.co/mradermacher/Opeai_HKV1_Identity_Llama-3.3-70B-Instruct-i1-GGUF/resolve/main/Opeai_HKV1_Identity_Llama-3.3-70B-Instruct.i1-IQ3_S.gguf) | i1-IQ3_S | 31.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Opeai_HKV1_Identity_Llama-3.3-70B-Instruct-i1-GGUF/resolve/main/Opeai_HKV1_Identity_Llama-3.3-70B-Instruct.i1-Q3_K_S.gguf) | i1-Q3_K_S | 31.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Opeai_HKV1_Identity_Llama-3.3-70B-Instruct-i1-GGUF/resolve/main/Opeai_HKV1_Identity_Llama-3.3-70B-Instruct.i1-IQ3_M.gguf) | i1-IQ3_M | 32.0 | |
| [GGUF](https://huggingface.co/mradermacher/Opeai_HKV1_Identity_Llama-3.3-70B-Instruct-i1-GGUF/resolve/main/Opeai_HKV1_Identity_Llama-3.3-70B-Instruct.i1-Q3_K_M.gguf) | i1-Q3_K_M | 34.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Opeai_HKV1_Identity_Llama-3.3-70B-Instruct-i1-GGUF/resolve/main/Opeai_HKV1_Identity_Llama-3.3-70B-Instruct.i1-Q3_K_L.gguf) | i1-Q3_K_L | 37.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Opeai_HKV1_Identity_Llama-3.3-70B-Instruct-i1-GGUF/resolve/main/Opeai_HKV1_Identity_Llama-3.3-70B-Instruct.i1-IQ4_XS.gguf) | i1-IQ4_XS | 38.0 | |
| [GGUF](https://huggingface.co/mradermacher/Opeai_HKV1_Identity_Llama-3.3-70B-Instruct-i1-GGUF/resolve/main/Opeai_HKV1_Identity_Llama-3.3-70B-Instruct.i1-Q4_0.gguf) | i1-Q4_0 | 40.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Opeai_HKV1_Identity_Llama-3.3-70B-Instruct-i1-GGUF/resolve/main/Opeai_HKV1_Identity_Llama-3.3-70B-Instruct.i1-Q4_K_S.gguf) | i1-Q4_K_S | 40.4 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Opeai_HKV1_Identity_Llama-3.3-70B-Instruct-i1-GGUF/resolve/main/Opeai_HKV1_Identity_Llama-3.3-70B-Instruct.i1-Q4_K_M.gguf) | i1-Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Opeai_HKV1_Identity_Llama-3.3-70B-Instruct-i1-GGUF/resolve/main/Opeai_HKV1_Identity_Llama-3.3-70B-Instruct.i1-Q4_1.gguf) | i1-Q4_1 | 44.4 | |
| [GGUF](https://huggingface.co/mradermacher/Opeai_HKV1_Identity_Llama-3.3-70B-Instruct-i1-GGUF/resolve/main/Opeai_HKV1_Identity_Llama-3.3-70B-Instruct.i1-Q5_K_S.gguf) | i1-Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/Opeai_HKV1_Identity_Llama-3.3-70B-Instruct-i1-GGUF/resolve/main/Opeai_HKV1_Identity_Llama-3.3-70B-Instruct.i1-Q5_K_M.gguf) | i1-Q5_K_M | 50.0 | |
| [PART 1](https://huggingface.co/mradermacher/Opeai_HKV1_Identity_Llama-3.3-70B-Instruct-i1-GGUF/resolve/main/Opeai_HKV1_Identity_Llama-3.3-70B-Instruct.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Opeai_HKV1_Identity_Llama-3.3-70B-Instruct-i1-GGUF/resolve/main/Opeai_HKV1_Identity_Llama-3.3-70B-Instruct.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 58.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
monzie/levy | monzie | 2025-04-21T09:07:08Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2025-04-21T09:07:08Z | ---
license: apache-2.0
---
|
carminho/carminho_instruct_3 | carminho | 2025-04-21T09:05:54Z | 203 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-03-28T17:09:10Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
dgambettaphd/M_llm3_gen7_run0_WXS_doc1000_synt64_FRESH | dgambettaphd | 2025-04-21T09:04:49Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-21T09:04:34Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
nitinmahawadiwar/mistral-web3-dpdp-ft | nitinmahawadiwar | 2025-04-21T09:01:57Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-21T09:01:26Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ShadowHacker110/phi-lora | ShadowHacker110 | 2025-04-21T09:01:08Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/phi-4-unsloth-bnb-4bit",
"base_model:finetune:unsloth/phi-4-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-21T09:00:54Z | ---
base_model: unsloth/phi-4-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** ShadowHacker110
- **License:** apache-2.0
- **Finetuned from model :** unsloth/phi-4-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
kuyan/Kuya | kuyan | 2025-04-21T09:00:49Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2025-04-21T09:00:49Z | ---
license: apache-2.0
---
|
lekhana123456/medical-gemma-2b-merged | lekhana123456 | 2025-04-21T09:00:41Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"en",
"base_model:unsloth/gemma-2b-bnb-4bit",
"base_model:quantized:unsloth/gemma-2b-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
]
| text-generation | 2025-04-21T08:59:08Z | ---
base_model: unsloth/gemma-2b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** lekhana123456
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-2b-bnb-4bit
This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
AIM-Intelligence/RepBend_Llama3_8B | AIM-Intelligence | 2025-04-21T09:00:34Z | 0 | 2 | transformers | [
"transformers",
"safetensors",
"arxiv:2504.01550",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-03-27T14:35:17Z | ---
license: apache-2.0
library_name: transformers
tags: []
---
## Model Description
This Llama3-based model is fine-tuned using the "Representation Bending" (REPBEND) approach described in [Representation Bending for Large Language Model Safety](https://arxiv.org/abs/2504.01550). REPBEND modifies the model’s internal representations to reduce harmful or unsafe responses while preserving overall capabilities. The result is a model that is robust to various forms of adversarial jailbreak attacks, out-of-distribution harmful prompts, and fine-tuning exploits, all while maintaining useful and informative responses to benign requests.
## Uses
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
model_id = "AIM-Intelligence/RepBend_Llama3_8B"
tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.bfloat16,
device_map="auto",
)
input_text = "Who are you?"
template = "<|start_header_id|>user<|end_header_id|>\n\n{instruction}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
prompt = template.format(instruction=input_text)
input_ids = tokenizer.encode(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(input_ids, max_new_tokens=256)
generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(generated_text)
```
## Code
Please refers to [this github page](https://github.com/AIM-Intelligence/RepBend/tree/main?tab=readme-ov-file)
## Citation
```
@article{repbend,
title={Representation Bending for Large Language Model Safety},
author={Yousefpour, Ashkan and Kim, Taeheon and Kwon, Ryan S and Lee, Seungbeen and Jeung, Wonje and Han, Seungju and Wan, Alvin and Ngan, Harrison and Yu, Youngjae and Choi, Jonghyun},
journal={arXiv preprint arXiv:2504.01550},
year={2025}
}
``` |
HZhun/RoBERTa-Chinese-Med-Inquiry-Intention-Recognition-base | HZhun | 2025-04-21T08:59:31Z | 23 | 1 | null | [
"pytorch",
"safetensors",
"bert",
"medical",
"text-classification",
"zh",
"base_model:hfl/chinese-roberta-wwm-ext",
"base_model:finetune:hfl/chinese-roberta-wwm-ext",
"license:apache-2.0",
"region:us"
]
| text-classification | 2024-10-19T01:31:07Z | ---
license: apache-2.0
language:
- zh
base_model:
- hfl/chinese-roberta-wwm-ext
pipeline_tag: text-classification
tags:
- medical
metrics:
- confusion_matrix
- accuracy
- f1
---
# RoBERTa-Based Medical Inquiry Intention Recognition Model (Fine-Tuned)
## Project Overview
- Project origin: the [Zhongke (Anhui) G60 Smart Health Innovation Research Institute](http://www.baidu.com/link?url=hBsQAUz1OS5CfR2IKFCFvaaskq2Z604ESlZ-beM1OlRhH39MBKVQtOPxx8sp2lZ2) (hereafter "Zhongke") is developing a dialogue-based triage system around a mental-health large language model; this project is the intention recognition task within that system.
- Model purpose: perform intention recognition on the `query` text a user enters into the dialogue system, classifying it as either medical inquiry (问诊) or casual chat (闲聊).
## Data Description
- Data source: built by cleaning, preprocessing, and merging open-source dialogue datasets from Hugging Face with Zhongke's internal domain-specific medical dialogue datasets.
- Data split: 6,000 samples in total, with 4,800 in the training set and 1,200 in the test set; positive and negative examples were kept balanced during dataset construction.
- Data samples:
```json
[
{
"query": "最近热门的5部电影叫什么名字",
"label": "nonmed"
},
{
"query": "关节疼痛,足痛可能是什么原因",
"label": "med"
},
{
"query": "最近出冷汗,肚子疼,恶心与呕吐,严重影响学习工作",
"label": "med"
}
]
```
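The balanced 4,800 / 1,200 split described above can be produced by splitting each label separately; a minimal pure-Python sketch of that idea (illustrative code, not the project's actual preprocessing):

```python
import random

def balanced_split(samples, test_ratio=0.2, seed=42):
    """Illustrative balanced train/test split: shuffle and cut each
    label group separately so both classes keep the same train/test
    proportion. (Sketch only, not the project's actual pipeline.)"""
    by_label = {}
    for s in samples:
        by_label.setdefault(s["label"], []).append(s)
    rng = random.Random(seed)
    train, test = [], []
    for group in by_label.values():
        rng.shuffle(group)
        cut = int(len(group) * (1 - test_ratio))
        train.extend(group[:cut])
        test.extend(group[cut:])
    return train, test

# 6,000 synthetic samples, 3,000 per label, mirroring the sizes above
data = [{"query": f"q{i}", "label": "med" if i % 2 else "nonmed"}
        for i in range(6000)]
train, test = balanced_split(data)
print(len(train), len(test))  # 4800 1200
```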
## Experimental Environment
[Featurize online platform instance](https://featurize.cn/):
- CPU: 6-core E5-2680 V4
- GPU: RTX 3060, 12.6 GB VRAM
- Preinstalled image: Ubuntu 20.04, Python 3.9/3.10, PyTorch 2.0.1, TensorFlow 2.13.0, Docker 20.10.10; CUDA kept as close to the latest version as possible
- Libraries to install manually:
```bash
pip install transformers datasets evaluate accelerate
```
## Training Approach
- Fine-tuned the Chinese pretrained model [chinese-roberta-wwm-ext](https://github.com/ymcui/Chinese-BERT-wwm), released by the Joint Laboratory of HIT and iFLYTEK Research (HFL), using Hugging Face's `transformers` library.
## Training Parameters, Results, and Limitations
- Training parameters
```python
{
    "output_dir": "output",
    "num_train_epochs": 2,
    "learning_rate": 3e-5,
    "lr_scheduler_type": "cosine",
    "per_device_train_batch_size": 16,
    "per_device_eval_batch_size": 16,
    "weight_decay": 0.01,
    "warmup_ratio": 0.02,
    "logging_steps": 0.01,
    "logging_strategy": "steps",
    "fp16": True,
    "eval_strategy": "steps",
    "eval_steps": 0.1,
    "save_strategy": "epoch"
}
```
- Fine-tuning results
| Dataset | Accuracy | F1 score |
| ------- | -------- | -------- |
| Test set | 0.99 | 0.98 |
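The reported accuracy and F1 follow from a binary confusion matrix in the standard way; a minimal pure-Python illustration (the counts below are made up for the example, not the actual evaluation):

```python
def binary_metrics(tp, fp, fn, tn):
    """Accuracy and F1 from binary confusion-matrix counts (the metrics
    listed in this card). Counts passed in are illustrative only."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, f1

acc, f1 = binary_metrics(tp=291, fp=3, fn=9, tn=897)
print(round(acc, 2), round(f1, 2))  # 0.99 0.98
```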
- Limitations
<font color="darkpink">Overall, the fine-tuned model performs well at recognizing medical inquiry intent; however, because the training data was limited in volume and lacking in sample diversity, its results may be biased in some situations.</font>
## How to Use
- Single-sample inference example
```python
from transformers import AutoTokenizer
from transformers import AutoModelForSequenceClassification
ID2LABEL = {0: "闲聊", 1: "问诊"}
MODEL_NAME = 'HZhun/RoBERTa-Chinese-Med-Inquiry-Intention-Recognition-base'
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(
MODEL_NAME,
torch_dtype='auto'
)
query = '这孩子目前28岁,情绪不好时经常无征兆吐血,呼吸系统和消化系统做过多次检查,没有检查出结果,最近三天连续早晨出现吐血现象'
tokenized_query = tokenizer(query, return_tensors='pt')
tokenized_query = {k: v.to(model.device) for k, v in tokenized_query.items()}
outputs = model(**tokenized_query)
pred_id = outputs.logits.argmax(-1).item()
intent = ID2LABEL[pred_id]
print(intent)
```
Terminal output
```plaintext
问诊
```
- Batch inference example
```python
from transformers import AutoTokenizer
from transformers import AutoModelForSequenceClassification
ID2LABEL = {0: "闲聊", 1: "问诊"}
MODEL_NAME = 'HZhun/RoBERTa-Chinese-Med-Inquiry-Intention-Recognition-base'
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME, padding_side='left')
model = AutoModelForSequenceClassification.from_pretrained(
MODEL_NAME,
torch_dtype='auto'
)
query = [
'胃痛,连续拉肚子好几天了,有时候半夜还呕吐',
'腿上的毛怎样去掉,不用任何药学和医学器械',
'你好,感冒咳嗽用什么药?',
'你觉得今天天气如何?我感觉咱可以去露营了!'
]
tokenized_query = tokenizer(query, return_tensors='pt', padding=True, truncation=True)
tokenized_query = {k: v.to(model.device) for k, v in tokenized_query.items()}
outputs = model(**tokenized_query)
pred_ids = outputs.logits.argmax(-1).tolist()
intent = [ID2LABEL[pred_id] for pred_id in pred_ids]
print(intent)
```
Terminal output
```plaintext
["问诊", "闲聊", "问诊", "闲聊"]
``` |
zlymon/my-flux-upscaler-endpoint | zlymon | 2025-04-21T08:57:43Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"ControlNet",
"super-resolution",
"upscaler",
"image-to-image",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:finetune:black-forest-labs/FLUX.1-dev",
"license:other",
"endpoints_compatible",
"region:us"
]
| image-to-image | 2025-04-21T08:01:48Z | ---
base_model:
- black-forest-labs/FLUX.1-dev
library_name: diffusers
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
pipeline_tag: image-to-image
inference: False
tags:
- ControlNet
- super-resolution
- upscaler
---
# ⚡ Flux.1-dev: Upscaler ControlNet ⚡
This is a [Flux.1-dev](https://huggingface.co/black-forest-labs/FLUX.1-dev) ControlNet for low-resolution images, developed by the Jasper research team.
<p align="center">
<img style="width:700px;" src="examples/showcase.jpg">
</p>
# How to use
This model can be used directly with the `diffusers` library
```python
import torch
from diffusers.utils import load_image
from diffusers import FluxControlNetModel
from diffusers.pipelines import FluxControlNetPipeline
# Load pipeline
controlnet = FluxControlNetModel.from_pretrained(
"jasperai/Flux.1-dev-Controlnet-Upscaler",
torch_dtype=torch.bfloat16
)
pipe = FluxControlNetPipeline.from_pretrained(
"black-forest-labs/FLUX.1-dev",
controlnet=controlnet,
torch_dtype=torch.bfloat16
)
pipe.to("cuda")
# Load a control image
control_image = load_image(
"https://huggingface.co/jasperai/Flux.1-dev-Controlnet-Upscaler/resolve/main/examples/input.jpg"
)
w, h = control_image.size
# Upscale x4
control_image = control_image.resize((w * 4, h * 4))
image = pipe(
prompt="",
control_image=control_image,
controlnet_conditioning_scale=0.6,
num_inference_steps=28,
guidance_scale=3.5,
height=control_image.size[1],
width=control_image.size[0]
).images[0]
image
```
<p align="center">
<img style="width:500px;" src="examples/output.jpg">
</p>
# Training
This model was trained with a complex synthetic data degradation scheme: a *real-life* image is taken as input and artificially degraded by combining several degradations, among them image noising (Gaussian, Poisson), image blurring, and JPEG compression, in a similar spirit to [1].
[1] Wang, Xintao, et al. "Real-esrgan: Training real-world blind super-resolution with pure synthetic data." Proceedings of the IEEE/CVF international conference on computer vision. 2021.
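A rough, dependency-free sketch of what such a degradation pipeline can look like (illustration only; the model's real training degradations and their parameters are not published here):

```python
import random

def degrade(image, noise_sigma=10.0, downscale=4):
    """Toy degradation pipeline: additive Gaussian noise, then a blocky
    box-downscale / nearest-neighbour upscale standing in for resolution
    loss. `image` is a list of rows of floats in [0, 255]."""
    h, w = len(image), len(image[0])
    # 1) additive Gaussian noise, clipped to the valid pixel range
    noisy = [[min(255.0, max(0.0, px + random.gauss(0.0, noise_sigma)))
              for px in row] for row in image]
    # 2) naive box downscale: average each downscale x downscale block
    small = [[sum(noisy[y * downscale + dy][x * downscale + dx]
                  for dy in range(downscale) for dx in range(downscale))
              / downscale ** 2
              for x in range(w // downscale)]
             for y in range(h // downscale)]
    # 3) nearest-neighbour upscale back to the original size
    return [[small[y // downscale][x // downscale]
             for x in range(w)] for y in range(h)]

random.seed(0)
low_quality = degrade([[128.0] * 8 for _ in range(8)])
print(len(low_quality), len(low_quality[0]))  # 8 8
```

JPEG compression artifacts would be a further stage in a fuller pipeline; they are omitted here to keep the sketch dependency-free.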
# Licence
This model falls under the [Flux.1-dev model licence](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md). |
WANGxiaohu123/bert-finetuned-ner | WANGxiaohu123 | 2025-04-21T08:56:20Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2025-04-21T08:36:20Z | ---
library_name: transformers
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: validation
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9352541811558205
- name: Recall
type: recall
value: 0.9505217098619994
- name: F1
type: f1
value: 0.942826141390535
- name: Accuracy
type: accuracy
value: 0.9868134455760287
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0584
- Precision: 0.9353
- Recall: 0.9505
- F1: 0.9428
- Accuracy: 0.9868
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
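For illustration, the `linear` scheduler above decays the learning rate from its peak to zero over training; a standalone sketch of that schedule (not the Trainer's internal code), with values taken from this card:

```python
def linear_lr(step, total_steps, peak_lr=2e-5, warmup_steps=0):
    """Linear learning-rate schedule: optional warmup to peak_lr, then a
    linear decay to zero. Sketch only; the HF Trainer's implementation
    lives in transformers' optimization module."""
    if warmup_steps and step < warmup_steps:
        return peak_lr * step / warmup_steps
    remaining = max(0, total_steps - step)
    return peak_lr * remaining / max(1, total_steps - warmup_steps)

total = 5268  # 3 epochs x 1756 steps, matching the results table below
print(f"{linear_lr(0, total):.1e}")      # 2.0e-05
print(f"{linear_lr(total, total):.1e}")  # 0.0e+00
```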
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0761 | 1.0 | 1756 | 0.0676 | 0.9005 | 0.9318 | 0.9159 | 0.9815 |
| 0.0352 | 2.0 | 3512 | 0.0611 | 0.9332 | 0.9470 | 0.9400 | 0.9857 |
| 0.0216 | 3.0 | 5268 | 0.0584 | 0.9353 | 0.9505 | 0.9428 | 0.9868 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
Lunzima/MilkyLoong-Qwen2.5-1.5B-pass4 | Lunzima | 2025-04-21T08:55:57Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:Lunzima/MilkyLoong-Qwen2.5-1.5B-pass3",
"base_model:finetune:Lunzima/MilkyLoong-Qwen2.5-1.5B-pass3",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-21T08:54:46Z | ---
base_model: Lunzima/MilkyLoong-Qwen2.5-1.5B-pass3
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Lunzima
- **License:** apache-2.0
- **Finetuned from model :** Lunzima/MilkyLoong-Qwen2.5-1.5B-pass3
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
TranscendentalX/gec_t5 | TranscendentalX | 2025-04-21T08:54:38Z | 0 | 0 | null | [
"safetensors",
"t5",
"text2text-generation",
"en",
"arxiv:1910.09700",
"base_model:google/flan-t5-base",
"base_model:finetune:google/flan-t5-base",
"region:us"
]
| text2text-generation | 2025-04-21T08:20:41Z | ---
language:
- en
base_model:
- google/flan-t5-base
- google/flan-t5-large
pipeline_tag: text2text-generation
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Mr-Vicky-01/AI-scanner | Mr-Vicky-01 | 2025-04-21T08:53:58Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-20T21:02:22Z | ---
library_name: transformers
license: apache-2.0
---
## INFERENCE
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer
import torch
import json
# Load model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("Mr-Vicky-01/AI-scanner")
model = AutoModelForCausalLM.from_pretrained("Mr-Vicky-01/AI-scanner")
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model.to(device)
sys_prompt = """<|im_start|>system\nYou are Securitron, an AI assistant specialized in detecting vulnerabilities in source code. Analyze the provided code and provide a structured report on any security issues found.<|im_end|>"""
user_prompt = """
CODE FOR SCANNING
"""
prompt = f"""{sys_prompt}
<|im_start|>user
{user_prompt}<|im_end|>
<|im_start|>assistant
"""
encodeds = tokenizer(prompt, return_tensors="pt", truncation=True).input_ids.to(device)
text_streamer = TextStreamer(tokenizer, skip_prompt=True)
response = model.generate(
input_ids=encodeds,
streamer=text_streamer,
max_new_tokens=512,
use_cache=True,
pad_token_id=151645,
eos_token_id=151645,
num_return_sequences=1
)
output = json.loads(tokenizer.decode(response[0]).split('<|im_start|>assistant')[-1].split('<|im_end|>')[0].strip())
``` |
kritianandan/whisper-medium-lora-optuna-legal | kritianandan | 2025-04-21T08:53:21Z | 0 | 0 | null | [
"tensorboard",
"safetensors",
"license:apache-2.0",
"region:us"
]
| null | 2025-04-21T08:52:39Z | ---
license: apache-2.0
---
|
stephantulkens/temp | stephantulkens | 2025-04-21T08:52:42Z | 0 | 0 | model2vec | [
"model2vec",
"safetensors",
"embeddings",
"static-embeddings",
"sentence-transformers",
"en",
"license:mit",
"region:us"
]
| null | 2025-04-21T08:52:36Z | ---
base_model: baai/bge-base-en-v1.5
language:
- en
library_name: model2vec
license: mit
model_name: stephantulkens/temp
tags:
- embeddings
- static-embeddings
- sentence-transformers
---
# stephantulkens/temp Model Card
This [Model2Vec](https://github.com/MinishLab/model2vec) model is a distilled version of the [baai/bge-base-en-v1.5](https://huggingface.co/baai/bge-base-en-v1.5) Sentence Transformer. It uses static embeddings, allowing text embeddings to be computed orders of magnitude faster on both GPU and CPU. It is designed for applications where computational resources are limited or where real-time performance is critical. Model2Vec models are the smallest, fastest, and most performant static embedders available. The distilled models are up to 50 times smaller and 500 times faster than traditional Sentence Transformers.
## Installation
Install model2vec using pip:
```
pip install model2vec
```
## Usage
### Using Model2Vec
The [Model2Vec library](https://github.com/MinishLab/model2vec) is the fastest and most lightweight way to run Model2Vec models.
Load this model using the `from_pretrained` method:
```python
from model2vec import StaticModel
# Load a pretrained Model2Vec model
model = StaticModel.from_pretrained("stephantulkens/temp")
# Compute text embeddings
embeddings = model.encode(["Example sentence"])
```
### Using Sentence Transformers
You can also use the [Sentence Transformers library](https://github.com/UKPLab/sentence-transformers) to load and use the model:
```python
from sentence_transformers import SentenceTransformer
# Load a pretrained Sentence Transformer model
model = SentenceTransformer("stephantulkens/temp")
# Compute text embeddings
embeddings = model.encode(["Example sentence"])
```
### Distilling a Model2Vec model
You can distill a Model2Vec model from a Sentence Transformer model using the `distill` method. First, install the `distill` extra with `pip install model2vec[distill]`. Then, run the following code:
```python
from model2vec.distill import distill
# Distill a Sentence Transformer model, in this case the BAAI/bge-base-en-v1.5 model
m2v_model = distill(model_name="BAAI/bge-base-en-v1.5", pca_dims=256)
# Save the model
m2v_model.save_pretrained("m2v_model")
```
## How it works
Model2Vec creates a small, fast, and powerful model that outperforms other static embedding models by a large margin on all tasks we could find, while being much faster to create than traditional static embedding models such as GloVe. Best of all, you don't need any data to distill a model using Model2Vec.
It works by passing a vocabulary through a sentence transformer model, then reducing the dimensionality of the resulting embeddings using PCA, and finally weighting the embeddings using [SIF weighting](https://openreview.net/pdf?id=SyK00v5xx). During inference, we simply take the mean of all token embeddings occurring in a sentence.
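To make the inference step above concrete, here is a minimal sketch (not the Model2Vec API) of what static-embedding inference reduces to: a table lookup followed by a mean over token vectors. The vocabulary and embedding matrix below are toy assumptions for illustration only.

```python
import numpy as np

# Toy vocabulary and static embedding table (assumed values, 3 dims for clarity).
vocab = {"example": 0, "sentence": 1, "another": 2}
embedding_table = np.array(
    [
        [0.1, 0.3, -0.2],   # "example"
        [0.4, -0.1, 0.5],   # "sentence"
        [-0.3, 0.2, 0.1],   # "another"
    ]
)

def encode(text: str) -> np.ndarray:
    """Embed a sentence as the mean of its known tokens' static vectors."""
    ids = [vocab[tok] for tok in text.lower().split() if tok in vocab]
    # Fancy-index the rows for each token, then average them into one vector.
    return embedding_table[ids].mean(axis=0)

emb = encode("Example sentence")
print(emb.shape)  # (3,)
```

Because there is no transformer forward pass at inference time, this lookup-and-average step is what makes static models fast on both CPU and GPU; the real library handles subword tokenization and SIF weighting on top of this idea.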
## Additional Resources
- [Model2Vec Repo](https://github.com/MinishLab/model2vec)
- [Model2Vec Base Models](https://huggingface.co/collections/minishlab/model2vec-base-models-66fd9dd9b7c3b3c0f25ca90e)
- [Model2Vec Results](https://github.com/MinishLab/model2vec/tree/main/results)
- [Model2Vec Tutorials](https://github.com/MinishLab/model2vec/tree/main/tutorials)
- [Website](https://minishlab.github.io/)
## Library Authors
Model2Vec was developed by the [Minish Lab](https://github.com/MinishLab) team consisting of [Stephan Tulkens](https://github.com/stephantul) and [Thomas van Dongen](https://github.com/Pringled).
## Citation
Please cite the [Model2Vec repository](https://github.com/MinishLab/model2vec) if you use this model in your work.
```
@article{minishlab2024model2vec,
author = {Tulkens, Stephan and {van Dongen}, Thomas},
title = {Model2Vec: Fast State-of-the-Art Static Embeddings},
year = {2024},
url = {https://github.com/MinishLab/model2vec}
}
``` |
quoctrungsdh1/secret | quoctrungsdh1 | 2025-04-21T08:52:20Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2025-04-21T08:52:20Z | ---
license: apache-2.0
---
|
teenysheep/sft_test7 | teenysheep | 2025-04-21T08:49:26Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-21T08:47:46Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |